
If you’re using ox-hugo and want it to generate markdown files with the date prefixing the file name, you can use this snippet to set the export file name on save with org-hugo-auto-export-mode turned on.

(defun ox-date-slug-prop ()
  (interactive)
  ;; Build "YYYY-MM-DD" from the heading's EXPORT_DATE property.
  (let ((dt (format-time-string "%Y-%m-%d" (apply #'encode-time (org-parse-time-string (org-entry-get (point) "EXPORT_DATE")))))
        (slug (org-hugo-slug (org-get-heading :no-tags :no-todo))))
    (org-set-property "EXPORT_FILE_NAME" (format "%s-%s" dt slug))))

(defun my-setup-hugo-auto-export ()
  "Set up advice to call `ox-date-slug-prop' before `org-hugo-export-wim-to-md'."
  (advice-add 'org-hugo-export-wim-to-md :before #'ox-date-slug-prop))

(my-setup-hugo-auto-export)

Having a usable logging and metrics stack for your hobby projects can be extremely expensive if you stick it inside your Kubernetes cluster or try to host it on a normal VPS provider (whether that means DigitalOcean or AWS).

Below is an example configuration I use for some hobby projects that uses a dedicated hosting provider (OVH).

This solves two main problems for me: hosting it securely (not exposing anything other than SSH) and having a beefy enough box to run Elasticsearch and APM.

This is a small setup script that installs the ELK stack and locks down the logging server to only allow SSH.

On the logging server:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
sudo apt-get install apt-transport-https
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update && sudo apt-get install elasticsearch kibana
vi /etc/elasticsearch/jvm.options
service elasticsearch start
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw enable
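The jvm.options edit is mostly about capping the heap so Elasticsearch doesn't eat the whole box. A minimal sketch, assuming a dedicated server with around 16 GB of RAM (set both values to roughly half your RAM):

# /etc/elasticsearch/jvm.options (excerpt)
-Xms8g
-Xmx8g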

Next, we need to download and apply the Filebeat and Metricbeat configs.

On the kubernetes cluster:

curl -L -O https://raw.githubusercontent.com/elastic/beats/7.10/deploy/kubernetes/filebeat-kubernetes.yaml
curl -L -O https://raw.githubusercontent.com/elastic/beats/7.10/deploy/kubernetes/metricbeat-kubernetes.yaml
kubectl apply -f filebeat-kubernetes.yaml
kubectl apply -f metricbeat-kubernetes.yaml
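The stock manifests ship to an in-cluster Elasticsearch, so before applying them you'll want to point the beats at the SSH tunnel service we create below. In the downloaded files this is typically just the Elasticsearch host/port env vars on each DaemonSet; a sketch:

- name: ELASTICSEARCH_HOST
  value: logging-forwarder
- name: ELASTICSEARCH_PORT
  value: "9200"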

Then we’ll need an SSH key to get into the logging server. You can create the secret from a given SSH private key, as shown below.
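If you don't already have a dedicated key pair, here's a sketch of generating one and authorizing it on the logging server (the hostname and root user are assumptions; use whatever you set up):

ssh-keygen -t rsa -f logging_ssh_key -N ""
ssh-copy-id -i logging_ssh_key.pub root@logging.example.com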

Locally:

kubectl create secret generic logging-ssh-key --from-file=id_rsa=logging_ssh_key

Install the APM server.

On the kubernetes cluster:

helm repo add elastic https://helm.elastic.co
helm install apm-server --version 7.10 elastic/apm-server

Create the below as a YAML manifest and apply it with kubectl apply -f filename.yaml. It:

  1. Sets up a ConfigMap with a startup script that port-forwards 9200 over SSH to the logging server
  2. Deploys a Service into the cluster to allow local services to talk to it
  3. Sets up health checks and liveness probes to restart the pod if the SSH connection gets interrupted
  4. Mounts the SSH key for the pod to connect, from a Secret
# get ssh key from logging
apiVersion: v1
kind: ConfigMap
metadata:
  name: "logging-ssh-forwarder-script"
data:
  start.sh: |
    #!/bin/sh
    apk add --update openssh-client curl
    mkdir ~/.ssh
    ssh-keyscan -H logging.example.com >> ~/.ssh/known_hosts
    ssh -i /etc/ssh-key/id_rsa -N -o GatewayPorts=true -L 9200:0.0.0.0:9200 root@logging.example.com
---
apiVersion: v1
kind: Service
metadata:
  name: logging-forwarder
spec:
  selector:
    run: logging-forwarder
  ports:
    - protocol: TCP
      port: 9200
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: filebeat-ssh-forwarder
spec:
  selector:
    matchLabels:
      run: logging-forwarder
  replicas: 1
  template:
    metadata:
      labels:
        run: logging-forwarder
    spec:
      containers:
        - name: forwarder
          image: alpine:latest
          command:
            - "/start"
          ports:
            - containerPort: 9200
          livenessProbe:
            exec:
              command:
                - curl
                - localhost:9200
          readinessProbe:
            exec:
              command:
                - curl
                - localhost:9200
          volumeMounts:
            - name: ssh-key-volume
              mountPath: "/etc/ssh-key"
            - name: logging-ssh-forwarder-script
              mountPath: /start
              subPath: start.sh
      volumes:
        - name: logging-ssh-forwarder-script
          configMap:
            name: logging-ssh-forwarder-script
            defaultMode: 0755
        - name: ssh-key-volume
          secret:
            secretName: logging-ssh-key
            defaultMode: 256 # 0400 in octal
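To sanity-check the tunnel once the forwarder pod is running, you can hit the service from a throwaway pod (a sketch; curlimages/curl is just a small image with curl as its entrypoint):

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- http://logging-forwarder:9200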

Lastly, you’ll need to change the APM server to point to our new service.

On the kubernetes cluster:

kubectl edit cm apm-server-apm-server-config
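Inside that ConfigMap the piece to change is the Elasticsearch output, so it goes through the tunnel. Roughly (a sketch of the relevant apm-server.yml section):

output.elasticsearch:
  hosts: ["logging-forwarder:9200"]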

You can now connect to your ELK stack and view APM metrics and other logs flowing in from your cluster:

ssh -L 5601:127.0.0.1:5601 root@logging.example.com

And open your browser to http://localhost:5601

Here is an example of the APM dashboard in Kibana under Observability -> Overview

And that’s it. Make sure you set up index lifecycle policies to rotate large indices so the disks don’t fill up.
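A minimal index lifecycle policy sketch, created from Kibana's Dev Tools (the rollover and delete thresholds are arbitrary; tune them to your disk):

PUT _ilm/policy/logs-cleanup
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "25gb", "max_age": "7d" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}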

Go Buffalo: Adding a 2nd database

Sep 4, 2020 - 1 minute

If you need to connect to multiple databases in your buffalo app, open up your models/models.go file:

Up at the top add a new var like:

var YourDB *pop.Connection

then in the init() func you can connect to it - the important part is to make sure you call .Open:

	YourDB, err = pop.NewConnection(&pop.ConnectionDetails{
		Dialect: "postgres",
		URL:     envy.Get("EXTRA_DB_URL", "default_url_here"),
	})
	if err != nil {
		log.Fatal(err)
	}
	// NewConnection only builds the struct; Open actually connects.
	if err = YourDB.Open(); err != nil {
		log.Fatal(err)
	}

That’s it! You can now connect to a 2nd database from within your app.
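Using it looks the same as the default connection; for example, fetching rows in an action (a sketch, assuming a hypothetical Widget model lives in the extra database):

widgets := []models.Widget{}
if err := models.YourDB.All(&widgets); err != nil {
	return err
}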

We’ll go over everything needed to get a small remote development environment up and running using code-server, Buffalo, and Postgres.

First let’s install Go and Buffalo with gofish:

sudo apt update
curl -fsSL https://raw.githubusercontent.com/fishworks/gofish/master/scripts/install.sh | bash
gofish init
gofish install go
gofish install buffalo
buffalo version # should say 0.16.12 or whatever latest is

Install Docker:

curl -fsSL get.docker.com | bash

Install NodeJS & Yarn:

curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -
curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
sudo apt-get update && sudo apt-get install yarn

Install code-server:

curl -fsSL https://code-server.dev/install.sh | sh

Add your user to the docker group:

sudo usermod -aG docker YOURUSER
# you may need to reboot after this step to get your user talking to the docker daemon properly

Start code-server, as your regular user (ie: not sudo):

systemctl --user enable --now code-server

Edit your config. Only do this if it’s on a private, trusted network; don’t do this on an internet-exposed server.

.config/code-server/config.yaml:

bind-addr: 0.0.0.0:8080
auth: none
password: xxxx
cert: false

Restart the server:

systemctl --user restart code-server

And finally let’s set up Postgres:

# for the server, this is quickest/easiest
docker run --name postgres -e POSTGRES_PASSWORD=postgres -p 5432:5432 --restart=always -d postgres
# for client
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get install postgresql-client-12

You can now psql to your app from the terminal:

psql "postgres://postgres:postgres@127.0.0.1:5432/demo_test?sslmode=disable"

Let’s create our demo buffalo app:

buffalo new demo
cd demo

Edit your database.yml to look like this for development:

---
development:
  url: {{envOr "DEV_DATABASE_URL" "postgres://postgres:postgres@127.0.0.1:5432/demo_test?sslmode=disable"}}

Then you can create and migrate your database:

buffalo db create && buffalo db migrate

Now just run buffalo dev in your VS Code terminal and you can browse your app and start working on it.

If you want other things like the clipboard to work properly, I’d suggest setting up a proxy with auth and a real SSL certificate, for example using Traefik or Caddy.
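For example, a rough Caddyfile sketch for that kind of proxy (code.example.com and the user are placeholders; generate the password hash with caddy hash-password):

code.example.com {
    basicauth {
        # replace with the output of: caddy hash-password
        me JDJhJDE0J...
    }
    reverse_proxy 127.0.0.1:8080
}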

When you set up your Cloud9 IDE for the first time, it comes pre-installed with Go 1.9 - if you’d like to update to the latest (as of this writing), just run the following commands:

wget https://golang.org/dl/go1.14.6.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf ./go1.14.6.linux-amd64.tar.gz
sudo mv /usr/bin/go /usr/bin/go-old # move the old binary out of the way

Edit your .bashrc file so your $PATH has the new location:

export PATH=$PATH:$HOME/.local/bin:$HOME/bin:/usr/local/go/bin

Source the file again so your settings are reloaded:

source ~/.bashrc

And now go version should show go1.14.6.

Now let’s install Buffalo with gofish:

curl -fsSL https://raw.githubusercontent.com/fishworks/gofish/master/scripts/install.sh | bash
gofish init
gofish install buffalo
buffalo version # should say 0.16.12 or whatever latest is

And finally let’s set up Postgres:

sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get install postgresql-12
sudo su - postgres
psql
postgres=# create role ubuntu with superuser createdb login password 'ubuntu';
postgres=# create database ubuntu;
postgres=# exit
exit

You should now be able to execute psql from your regular user.

Let’s create our demo buffalo app:

buffalo new demo
cd demo

Edit your database.yml to look like this for development:

---
development:
  url: {{envOr "DEV_DATABASE_URL" "postgres://ubuntu:ubuntu@/demo_development"}}

Then you can create and migrate your database:

buffalo db create && buffalo db migrate

Now just run buffalo dev in your Cloud9 terminal and you can preview your application! (Cloud9 already sets the PORT env var).

And then you realize that auto-complete support doesn’t work, spot this little note

This feature is in an experimental state for this language. It is not fully implemented and is not documented or supported.

on the AWS product page, and go back to IntelliJ.

15 Years of Remote Work

Mar 22, 2020 - 6 minutes

In one form or another I’ve worked remotely since 2005 (with a brief mix of remote/onsite for about 2 years).

Below is a checklist of the things I’ve found that are required to have a successful remote work life and ensure you join a great remote team.

Why I like working remote

If it’s going to be your first time working remote, you should figure out why you want to do it.

I get to have my own space

I prefer having my own area to work in. I have surround sound, an entire L-shaped desk spanning a good chunk of a wall, and plenty of desk space. Headphones bother my ears, so this lets me listen to music as loud as I want, keep the temperature where I want it, and set up my work area with personal knick-knacks (without having to worry about them vanishing).

One of the important things is to have a dedicated working area - whether that’s a room in your house or even renting a co-working space (I’ve had team-mates do that).

Creating your own routine

Sign in to chat, say hello, then go make some coffee. Figure out a pattern that works for you - you probably don’t have far to walk to get to your desk. There’s no rush to beat traffic or hit up your coffee shop on the way in. One of my favorite things about being at home is being able to make my lunch fresh whenever I want (usually I do meal-prep for a few days) - some offices have a kitchen, but it’s not the same as having your own pots, pans and knives.

Figure out how you like to work

Since you have a dedicated work space you can work how you like. Depending on my mood, I’ll have anything from Spotify going, to listening to cooking shows in the background, to having re-runs on Netflix for background noise. I’ve had friends also do things like have CSPAN and other news networks on for background noise.

Don’t waste time traveling

I worked on a hybrid team recently, and some people in NYC would spend hours traveling. That’s hours of your life gone sitting in a car or train.

Not having to sit in traffic (even if you’re only a short way from home) is one less stress to worry about at the end of the day.

Pets!

When my previous dog Spunky was going through chemo and cancer surgeries, it was such a relief to be able to be at home with him. This isn’t something I would’ve been able to do at a regular office job.

Currently I have two golden retrievers and a lhasa - being able to reach down and pet your dogs or step outside to play with them is amazing. When the weather is nice I’ll even grab a lawn chair and sit outside to work and play fetch until the golden decides to bury the ball in the yard.

No open office plans

Open office plans are awful, and you get to avoid them working from home! Unfortunately one company I worked at with a hybrid setup had an open office plan and a ping pong table, making it almost impossible to hear co-workers when pairing or meeting unless they went into a separate conference room.

You have a strong local network of friends

You’ll be working remotely - you need to make sure you still get out and interact with people. Having a good network of friends locally gives you the social fix you’d otherwise get from officemates.

Things to look for in remote companies

Now that you know why you want to work remotely, you need to know what to look for in companies.

Make sure a good portion of the company is actually remote

If you’re joining a smaller team within the company, it’s okay to compromise if the team is fully remote as well. The important aspect here is that there are expectations around communicating with people in the open (ie: no water cooler or office hallway chats).

How do they communicate?

You want to make sure there’s a mix of online/offline and face-to-face communication. Online chats like Slack/Mattermost/Hipchat solve the immediate/quick conversations. Email is generally what I’ve used for offline communication.

Video chat is important and can help make explaining things and clarifying designs a lot easier than text. I’ve used Zoom and Google Meet/Hangouts in the past (Zoom being my preference).

Do they meet up?

I hate flying but it’s one of the things I’ve compromised on. Ensure your company (or team), if fully remote, is meeting up at least once a year. It’s very important to build work friendships and rapport with your team-mates. My preference is twice a year, but that will vary person-to-person.

Transparency

Find a company that is transparent about roadmap, customers, where you’re taking the product, etc. Having open communication with the product management team is super important, and knowing where goals are tracked and how far along they are is very useful when working remotely.

What type of work / planning system are they using?

It’s a good idea to figure out if they’re using Kanban/Scrum or some other variation (turns out no one does scrum or agile exactly the same). The important thing is that you understand how they operate, what pieces of scrum (or whatever they’re using) are being utilized, and the expectations for things like demo days.

Do they pay market rates?

I’ve seen some companies try and negotiate salary down based on cost of living for where you live. Don’t ever accept this - your talent is worth the same remotely as it is in person in a large city, even more so since you’ll be more productive.

Do they talk about things other than work?

It’s nice to chat about things outside of work to get to know your co-workers; examples from my past have included channels centered around food/recipes, gardening, woodworking, etc. Some place where people can share outside interests without polluting main work channels.

Working, tools, people

Ensure you have a good internet connection

It’s very important to have a solid internet connection. It’s very hard to hear or pair with people when the video looks like pixelated garbage. I even used to have a backup internet line (unfortunately my new area doesn’t have multiple providers).

Don’t be afraid to hop on a hangout/video chat

Sometimes it’s easier to go through a code review or work through a problem over video and screen share than over chat.

Be respectful to your co-workers on video calls

I don’t mean the obvious things here like being courteous and respectful.

It’s really important to be aware of your surroundings. It may work great for you to be in a coffee shop around people with some music in the background.

When it’s time to meet with people over video chat though, ensure you can relocate to a quiet space for meetings/pairing. Trying to talk and listen with background music and chatter in a coffee / sandwich shop is a very unpleasant experience for the people on the other end.

Don’t misinterpret things

Text can sometimes seem aggressive if not phrased properly - if you’ve met or know your teammates, remember that 99.9999% of the time they aren’t trying to be hostile. Going over code reviews or other in-depth architecture discussions over video chat can help alleviate this. The same way text messages can be misinterpreted spills over into work chats, since everything is text - keep an open mind and be flexible.

Here’s a sample application to show how to stitch together Buffalo, gqlgen and GraphQL subscriptions. Github Repo

I’ll go over the important parts here. After generating your buffalo application you’ll need a graphql schema file and a gqlgen config file:

# schema.graphql
type Example {
  message: String
}

type Subscription {
  exampleAdded: Example!
}

and your config file:

# gqlgen.yml
struct_tag: json
schema:
- schema.graphql
exec:
  filename: exampleql/exec.go
  package: exampleql
model:
  filename: exampleql/models.go
  package: exampleql
resolver:
  filename: exampleql/resolver.go
  type: Resolver

Next let’s generate our graphql files:

go run github.com/99designs/gqlgen --verbose

Now we can open up our resolver.go file and add a New method to make creating the handler easier:

func New() Config {
	return Config{
		Resolvers: &Resolver{},
	}
}

Let’s also add our resolver implementation:

func (r *subscriptionResolver) ExampleAdded(ctx context.Context) (<-chan *Example, error) {
	msgs := make(chan *Example, 1)

	// Push a new random message onto the channel every second.
	go func() {
		for {
			msgs <- &Example{Message: randString(50)}
			time.Sleep(1 * time.Second)
		}
	}()
	return msgs, nil
}

var letterRunes = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")

func randString(n int) *string {
	b := make([]rune, n)
	for i := range b {
		b[i] = letterRunes[rand.Intn(len(letterRunes))]
	}
	s := string(b)
	return &s
}

Inside your app.go file you’ll need to add a few handlers and wrap them in buffalo’s handler as well:

// cors is github.com/rs/cors; handler and transport come from
// github.com/99designs/gqlgen/graphql/handler, websocket from github.com/gorilla/websocket.
c := cors.New(cors.Options{
	AllowedOrigins:   []string{"http://localhost:3000"},
	AllowCredentials: true,
})

srv := handler.New(exampleql.NewExecutableSchema(exampleql.New()))
srv.AddTransport(transport.POST{})
srv.AddTransport(transport.Websocket{
	KeepAlivePingInterval: 10 * time.Second,
	Upgrader: websocket.Upgrader{
		// Allow websocket connections from any origin (fine for dev, not production).
		CheckOrigin: func(r *http.Request) bool {
			return true
		},
	},
})

app.ANY("/query", buffalo.WrapHandler(c.Handler(srv)))

app.GET("/play", buffalo.WrapHandler(playground.Handler("Example", "/query")))

Now if you head over to the playground and run this query:

subscription {
  exampleAdded {
    message
  }
}

You should see random messages scrolling by, something like this:
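Each tick is a standard GraphQL subscription payload; with this schema it looks roughly like (the message is 50 random letters):

{
  "data": {
    "exampleAdded": {
      "message": "oQbWkNrLpXcVdJhGsAeTyUiZmFnBvCxKjHlPdSqWrEuYtIoAzM"
    }
  }
}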

A few caveats:

  • This is not production-ready code
  • If you were doing something with multiple load-balanced nodes you should be using something like Redis or NATS pub/sub to handle messaging
  • This isn’t cleaning up channels or doing the other housekeeping you should be doing for live code

If you’ve been using helm you’ve inevitably run into a case where a

helm upgrade --install

has failed and helm is stuck in a FAILED state when you list your deployments.

First, make sure any old pods are cleaned up (ie: if they’re in OutOfEphemeralStorage or some other error condition).

Next, to get around this without doing a helm delete NAME --purge:

helm rollback NAME REVISION
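If you’re not sure which revision to roll back to, helm history will list them (pick the last good DEPLOYED one):

helm history NAME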

This should hopefully go away in Helm 3.

This is useful if you’re building a generic library/package and want to let people pass in types and convert to them/return them.

package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

type Monkey struct {
	Bananas int
}

func main() {
	deliveryChan := make(chan interface{}, 1)
	someWorker(&Monkey{}, deliveryChan)
	monkey := <-deliveryChan
	fmt.Printf("Monkey: %#v\n", monkey.(*Monkey))
}

func someWorker(inputType interface{}, deliveryChan chan interface{}) {
	// Allocate a fresh value of the caller's concrete type.
	local := reflect.New(reflect.TypeOf(inputType).Elem()).Interface()
	json.Unmarshal([]byte(`{"Bananas":20}`), local)
	deliveryChan <- local
}

The json.Unmarshal call should be using whatever byte slice you’re popping off a MQ or stream or something else to send back.

Helm not updating image, how to fix

Jul 16, 2018 - 1 minute

If you have your imagePullPolicy: Always and deploys aren’t going out (for example if you’re using a static tag, like stable), you may be running into a helm templating bug/feature.

If your helm template diff doesn’t change when being applied, the update won’t go out, even if you’ve pushed a new image to your docker registry.

A quick way to fix this is to set a commit sha in your CI/CD pipeline; in GitLab, for example, this is $CI_COMMIT_SHA.

If you template this out into a values.yaml file and add it as a label on your Deployment’s pod template, then when you push out updates your template will differ from the remote one, and tiller and helm will trigger a rollout, provided you’ve set it properly. For example:

script:
    - helm upgrade -i APP_NAME CHART_PATH --set commitHash=$CI_COMMIT_SHA
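And the matching label in the chart’s deployment template, placed on the pod template so a new sha forces new pods (a sketch; commitHash is just the value name used above):

# templates/deployment.yaml (excerpt)
spec:
  template:
    metadata:
      labels:
        commitHash: "{{ .Values.commitHash }}"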