<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[rob salmond]]></title><description><![CDATA[all wrong
all day long]]></description><link>https://rob.salmond.ca/</link><generator>Ghost 0.11</generator><lastBuildDate>Fri, 09 Jan 2026 13:52:02 GMT</lastBuildDate><atom:link href="https://rob.salmond.ca/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[this is how i terraform]]></title><description><![CDATA[<p>I started using <a href="https://github.com/hashicorp/terraform">terraform</a> at work in 2016. In 2018 I wrote a talk called "7 Ways Terraform Will Kill You" which I mothballed because I was certain every one of the gotchas I wanted to highlight would soon be fixed in an upcoming release. Many of them were addressed</p>]]></description><link>https://rob.salmond.ca/this-is-how-i-terraform/</link><guid isPermaLink="false">c239c19f-c8b4-4268-bf76-61d77b162f30</guid><dc:creator><![CDATA[Rob Salmond]]></dc:creator><pubDate>Thu, 19 Dec 2024 12:32:44 GMT</pubDate><content:encoded><![CDATA[<p>I started using <a href="https://github.com/hashicorp/terraform">terraform</a> at work in 2016. In 2018 I wrote a talk called "7 Ways Terraform Will Kill You" which I mothballed because I was certain every one of the gotchas I wanted to highlight would soon be fixed in an upcoming release. Many of them were addressed in 2019 with the major language overhaul of <a href="https://github.com/hashicorp/terraform/blob/v0.12/CHANGELOG.md#0120-may-22-2019">terraform 0.12</a>, but I probably should have given that talk. I'm sure it would have helped someone.</p>

<p>I say all this to say that in the roughly nine years that I have been using terraform to manage large infrastructure projects I have banged my shins on many sharp corners and the bruises have left strong opinions.</p>

<p>If you already have strong opinions about terraform, <strong>this post is not for you</strong>. Your approach is great, keep using it. My opinion doesn't matter. Don't read another word. Go see what's happening <a href="https://hachyderm.io/explore">on mastodon</a> or something. </p>

<blockquote>
  <p><strong>If you do not have strong opinions</strong> about terraform and are just getting started, <strong>you are my target audience</strong>.</p>
</blockquote>

<p><strong>Assumptions</strong>: everything here is based on the assumption that you're using terraform for something <strong>long term</strong> and <strong>important</strong>. If you just want to get something done quickly, or the thing you're working on can be burned down and recreated without causing problems, none of this applies to you.</p>

<p>(aside: all this applies to <a href="https://opentofu.org/">opentofu</a> as well).</p>

<h4 id="rule1complexityistheenemy">Rule 1. Complexity is the enemy</h4>

<p>In a regular software project one can optimize for any number of things; memory or CPU efficiency, ability to easily refactor and add features, correctness of the implementation (eg. rigorous tests) and so on. With a nontrivial terraform project you should optimize for one thing: <strong>ability to reason about the project</strong>.</p>

<blockquote>
  <p>It is incredibly easy to build a terraform project which is extremely difficult to reason about.</p>
</blockquote>

<p>Yes the plan helps but do NOT believe the plan! Apply is all that matters and <code>plan != apply</code>. If you have not yet seen a <strong>plan that looked good</strong> which turned into an <strong>apply that went bad</strong> don't worry, you will.</p>

<p>But before you can even see the plan you have to implement your change. You have to look at the existing codebase, decide how to structure your change and what needs to be modified, and execute. In a nontrivial terraform codebase this can be a daunting task!</p>

<p>Sources of complexity are numerous:</p>

<ul>
<li>layers of indirection (eg. nested modules)
<ul><li>Speaking of which, do not nest modules.</li></ul></li>
<li>disparate inputs (eg. env vars, tfvars files)</li>
<li>complex logic</li>
</ul>

<p>At every turn, with every PR, you must push back on anything that increases complexity. When you inevitably cave and add another layer of indirection or piece of logic, leave a comment that <a href="https://blog.codinghorror.com/code-tells-you-how-comments-tell-you-why/">explains <em>why</em></a>.</p>

<p>The comment audience is your future self, who will have long forgotten what you were thinking.</p>

<blockquote>
  <p>The fewer places you have to look to figure out what's happening, the better.</p>
</blockquote>

<h4 id="dontthinkofitasiac">Don't think of it as "IaC"</h4>

<p>The problem with the phrase "Infrastructure as Code" is in the word "code". As soon as you call it code people start to cargo cult in all these software engineering principles that have NO BUSINESS in a terraform project (see rule 1). If you want to control your infrastructure with actual code go use Pulumi or AWS CDK and implement an <code>AbstractLoadBalancerFactoryBaseClass()</code> or whatever.</p>

<blockquote>
  <p>Terraform is "infrastructure as config files".</p>
</blockquote>

<p>Sure, go ahead and write reusable modules. Add loops. Use conditionals. But do so sparingly. It is much easier to reason about 27 <code>github_repo</code> resources with slightly different configs than one module called 27 times or worse, looping over one module using a list that contains 27 data structures representing each repo config.</p>
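<p>To make that concrete, here's a sketch of the two styles (the GitHub provider's actual resource type is <code>github_repository</code>; the repo names are invented):</p>

<pre><code># plain and boring: each repo's config is visible at a glance
resource "github_repository" "billing" {
  name       = "billing"
  visibility = "private"
}

resource "github_repository" "docs" {
  name       = "docs"
  visibility = "public"
}

# the "clever" alternative: the configs hide inside a data structure
# and every change requires mentally executing the loop
variable "repos" {
  type = map(object({ visibility = string }))
  default = {
    billing = { visibility = "private" }
    docs    = { visibility = "public" }
  }
}

resource "github_repository" "all" {
  for_each   = var.repos
  name       = each.key
  visibility = each.value.visibility
}
</code></pre>

<p>Two repos in, the looping version already takes longer to read.</p>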

<h4 id="dontusecommunitymodules">Don't use community modules</h4>

<p>A terraform module is an opinion about how to do things with respect to some group of resources. The problem with community modules is that they need to support the <em>many</em> opinions present in the community, making them more complex by necessity (see rule 1).</p>

<blockquote>
  <p>Adam Jacob brilliantly explains this using what he calls the <strong>"200% knowledge problem"</strong> <a href="https://youtu.be/5lPa2U239C4?t=2028">in this talk</a> (deep link to the specific moment, just watch for 40 seconds).</p>
</blockquote>

<p>Read them for inspiration and then write your own, with fewer resources and using fewer variables and conditions. Instead of swallowing community opinions wholesale, capture the opinions of your organization within your own modules. A module should contain opinions like "This ALB configuration suits the needs of our app" or "replicating data across two zones in a single region is sufficiently robust for our needs".</p>
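<p>For example, an in-house ALB module might be nothing more than this (a sketch; the names and the specific opinions are placeholders for whatever your org actually decides):</p>

<pre><code># modules/alb/main.tf: the whole opinion is "internal application LB
# with deletion protection on"; callers only supply a name and subnets
variable "name" {
  type = string
}

variable "subnets" {
  type = list(string)
}

resource "aws_lb" "this" {
  name                       = var.name
  internal                   = true
  load_balancer_type         = "application"
  subnets                    = var.subnets
  enable_deletion_protection = true
}

output "dns_name" {
  value = aws_lb.this.dns_name
}
</code></pre>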

<p>Disregard anyone who argues this point by invoking phrases like "reinventing the wheel". Those people are not on the hook for maintaining your infrastructure, you are.</p>

<h4 id="dontaddtoolsuntilyoureforcedto">Don't add tools until you're forced to</h4>

<ul>
<li>TF Cloud is fine.</li>
<li>Spacelift is fine.</li>
<li>Atlantis is fine.</li>
<li>Terragrunt is fine I guess, I've never used it.</li>
<li>I do not understand the point of terratest at all.
<ul><li>Please do not try to explain it to me, I don't care.</li></ul></li>
<li>I'm sure the others I've forgotten are cool too.</li>
</ul>

<p>Don't use them unless you ABSOLUTELY NEED THEM (see rule 1). For some reason everyone is happy to say "Kubernetes is overkill for most teams" but nobody wants to say "TF Cloud is overkill for most teams". </p>

<p>Well I'm sayin it!</p>

<p>Many platform / SRE teams are 1-4 people and you can get most of the value of collaboration tools like Atlantis from a little team communication and good PR hygiene. </p>

<p>Iterating on a broken plan/apply cycle with a tool like Atlantis in the developer loop <em>sucks</em>. Just tell the team you're planning / applying from your laptop while you iterate, trust the <a href="https://developer.hashicorp.com/terraform/language/state/locking">state lock</a> to avoid collisions, and communicate the results when you're finished.</p>

<p>With respect to the tools that don't focus on collaboration; you can get very far using just terraform and some project structure. If you do not have a rock solid reason to adopt them, don't.</p>

<h4 id="pursueconsistencywhenthreading">Pursue consistency when "threading"</h4>

<p>A big terraform project involves repeatedly passing information from one place to the next, and then when terraform is complete it is often necessary to pass <a href="https://developer.hashicorp.com/terraform/language/values/outputs">outputs</a> into something else like an ansible playbook or a helm chart.</p>

<p>eg.</p>

<pre><code>variable "domain" {  
  type = string
  default = "foo.com"
}

module "dns" {  
  root = var.domain
}

module "loadbalancer" {  
  hostname = module.dns.registered_domain
}

output "monitoring_endpoint" {  
  value = module.loadbalancer.external_name
}
</code></pre>

<p>Here the domain for a project gets "threaded" from a variable called <code>domain</code> into the DNS module as a parameter called <code>root</code>, which outputs a value called <code>registered_domain</code> (maybe having prefixed <code>www</code> or something), which is passed to the LB module as a parameter called <code>hostname</code>, then output again as <code>external_name</code> (maybe having added <code>https://</code>), and finally output from terraform into some monitoring tool as <code>monitoring_endpoint</code>. This (not as contrived as you might think) example passes the same data around with <strong>six different labels</strong>.</p>

<p>Could you make the argument that each use of this data happens in its own domain and has its own internal data model that makes sense for that use case? Sure.</p>

<p>Fight back against that argument.</p>

<p>Try to use a consistent name across the entire lifecycle of the data as it gets passed around. Reduce the number of things you have to look at to understand what's happening (see rule 1).</p>
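<p>The earlier example, rethreaded with one consistent label (this assumes you control both modules and can rename their variables and outputs):</p>

<pre><code>variable "domain" {
  type    = string
  default = "foo.com"
}

module "dns" {
  domain = var.domain
}

module "loadbalancer" {
  domain = module.dns.domain
}

output "domain" {
  value = module.loadbalancer.domain
}
</code></pre>

<p>Now "where does the domain come from" has exactly one answer at every layer.</p>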

<h4 id="dontrelyonmemory">Don't rely on memory</h4>

<p>Avoid anything that requires you to <strong>remember to do something before you apply</strong>. Don't set it up so you have to export the correct <code>AWS_SECRET_ACCESS_KEY</code>. Instead hard code a <a href="https://docs.aws.amazon.com/cli/v1/userguide/cli-configure-files.html#cli-configure-files-format-profile">named profile</a> or a specific IAM role in the provider config.</p>
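<p>Something like this in the provider config (the profile name and role ARN here are made up, use your own):</p>

<pre><code>provider "aws" {
  region  = "us-west-2"
  profile = "prod-admin" # named profile from ~/.aws/config

  # or pin a specific role instead:
  # assume_role {
  #   role_arn = "arn:aws:iam::123456789012:role/terraform"
  # }
}
</code></pre>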

<p>Don't require your team to switch to the correct workspace before applying, in fact <a href="https://developer.hashicorp.com/terraform/cli/workspaces#when-not-to-use-multiple-workspaces">don't use workspaces</a> at all.</p>

<p>It's too easy to accidentally bulldoze production this way. Make it <strong>impossible to fuck up</strong>.</p>

<h4 id="style">Style</h4>

<p>This is just a random smattering of preferences which help improve maintainability.</p>

<hr>

<p><strong>Do not put everything into one huge pile</strong>, separate things into separate terraform states. There are lots of guides on how to structure this. <a href="https://www.antonbabenko.com/how-i-structure-terraform-configurations-2016-archived/">This one</a> is fine but there are plenty of others. Choose one.</p>
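<p>A sketch of what that separation might look like on disk (a hypothetical layout, not a prescription):</p>

<pre><code>infra/
├── network/    # its own state: VPC, subnets, routing
│   └── main.tf
├── database/   # its own state: changes rarely, blast radius is scary
│   └── main.tf
└── app/        # its own state: changes often, safe to iterate on
    └── main.tf
</code></pre>

<p>Three small plans you can reason about beat one giant plan you can't.</p>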

<hr>

<p><strong>Never use <code>count</code></strong> to create more than one of something. <code>count</code> should only ever be used when you want to create something if some condition is true, and not create it otherwise. If you want to create more than one of something, use <code>for_each</code>.</p>

<p>I searched for a better source to explain why but <a href="https://old.reddit.com/r/Terraform/comments/xe1sgi/why_do_so_many_people_hate_count/">this was the most succinct</a> thing I could find.</p>
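<p>In code, the distinction looks like this (the resource choice is arbitrary, it applies to anything):</p>

<pre><code>variable "enable_audit_logs" {
  type    = bool
  default = false
}

# count as an on/off switch only
resource "aws_cloudwatch_log_group" "audit" {
  count = var.enable_audit_logs ? 1 : 0
  name  = "audit"
}

# for_each for multiples: instances are keyed by name, so removing
# "worker" doesn't renumber (and destroy/recreate) its neighbours
resource "aws_cloudwatch_log_group" "services" {
  for_each = toset(["api", "worker", "cron"])
  name     = each.value
}
</code></pre>

<p>The short version of the argument: <code>count</code> addresses instances by position, so deleting one from the middle of the list shifts every index after it.</p>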

<hr>

<p>Arguments to resources are the domain of the provider and you are beholden to them. AWS tags can only contain certain characters while K8s labels can only contain others. Don't fret about what goes in there, but <strong>terraform resource names are yours</strong>. You decide how they look and feel.</p>

<ul>
<li>Never capitalize them.</li>
<li>Never use anything but underscores as separators.</li>
<li>Never put the resource <em>type</em> in the name.
<ul><li>The type is right there next to it. This tip brought to you by the department of redundancy department.</li></ul></li>
</ul>

<h3 id>🤮</h3>

<p><code>resource "aws_instance" "Nginx-Instance"</code></p>

<h3 id>🤩</h3>

<p><code>resource "aws_instance" "nginx"</code></p>]]></content:encoded></item><item><title><![CDATA[reproing a repro of an old istio vulnerability]]></title><description><![CDATA[<p>tl;dr - see my repro repo <a href="https://github.com/rsalmond/CVE-2021-34824">here</a>.</p>

<p>I recently got nerd-sniped by a <a href="https://www.cyberark.com/resources/threat-research-blog/what-i-learned-from-analyzing-a-caching-vulnerability-in-istio">blog post that explored an old istio vulnerability</a>. The vulnerability meant that istio users whose access to Kubernetes secrets was restricted by RBAC policies could circumvent those policies in order to access kubernetes secrets in any</p>]]></description><link>https://rob.salmond.ca/reproing-a-repro-of-an-old-istio-vulnerability/</link><guid isPermaLink="false">34680c1e-3946-4528-b374-099e6d4e4750</guid><dc:creator><![CDATA[Rob Salmond]]></dc:creator><pubDate>Thu, 29 Dec 2022 17:53:02 GMT</pubDate><content:encoded><![CDATA[<p>tl;dr - see my repro repo <a href="https://github.com/rsalmond/CVE-2021-34824">here</a>.</p>

<p>I recently got nerd-sniped by a <a href="https://www.cyberark.com/resources/threat-research-blog/what-i-learned-from-analyzing-a-caching-vulnerability-in-istio">blog post that explored an old istio vulnerability</a>. The vulnerability meant that istio users whose access to Kubernetes secrets was restricted by RBAC policies could circumvent those policies in order to access kubernetes secrets in any namespace that istio had access to (which is all of them by default).</p>

<p>I remember when this bugfix shipped, and I didn't think much of it at the time. Trying to follow the steps in the post, I got confused, then realized some key bits of information were missing. To be fair to the author, as someone who teaches people to use istio for a living I appreciate how hard it is to explain this stuff. This is not intended as a critique of the author but as an addition to the conversation.</p>

<p>The repro environment involves deploying istio with two custom ingress gateways into separate namespaces called <code>ns-a</code> and <code>ns-b</code>. The author advises readers to do this using two different <a href="https://istio.io/latest/docs/setup/additional-setup/customize-installation/">customized installation profiles</a>; however, in the browsers I tried, the YAML rendering in the blog post seemed mangled somehow, which made it hard to read. It is unnecessary to use two separate profiles to achieve the desired configuration of a gateway in each namespace, so I merged them <a href="https://github.com/rsalmond/CVE-2021-34824/blob/main/manifests/istio-profile.yaml">into a single profile</a>.</p>

<p>The provided installation profile also installs istio without any of the CRDs or the istiod controller, which provides the gateway injection webhook functionality. As it is impossible to reproduce this issue as described without both of those things, my modified profile installs them as well.</p>

<p>The subsequent steps outlined in the post involve modifying and applying more of those hard-to-read manifests. This was a bit fiddly, so I have <a href="https://github.com/rsalmond/CVE-2021-34824/tree/main/manifests">checked in formatted versions here</a>.</p>

<p>After applying these manifests the cluster will have a <a href="https://github.com/rsalmond/CVE-2021-34824/blob/main/manifests/a/nginx.yaml">toy workload</a> (I used nginx) in each namespace, a <a href="https://github.com/rsalmond/CVE-2021-34824/blob/main/manifests/a/vs.yaml">virtualservice</a> in each namespace to route traffic to the toy workload, and a <a href="https://github.com/rsalmond/CVE-2021-34824/blob/main/manifests/a/gateway.yaml">gateway configuration</a> in each namespace to terminate TLS. TLS termination is performed using a <a href="https://github.com/rsalmond/CVE-2021-34824/blob/main/deploy_test.sh#L14-L15">self-signed certificate and key</a> found in a kubernetes secret called <code>foo-dot-com</code>. Crucially however, the secret <em>only</em> exists in the <code>ns-a</code> namespace.</p>

<p>If the istio version being tested is vulnerable to the security issue, then it will be possible to connect to the gateway in namespace <code>ns-b</code> using TLS, despite the fact that the secret necessary for terminating TLS is not there.</p>

<p>If the istio version being tested is not vulnerable, then it will not be possible and the gateway will reset the connection. I have included a <a href="https://github.com/rsalmond/CVE-2021-34824/blob/main/evaluate_test.sh">test script</a> to validate this.</p>]]></content:encoded></item><item><title><![CDATA[how to make a pixel watch shut the fuck up]]></title><description><![CDATA[<p>The default notification experience of a pixel watch is roughly akin to strapping the whole internet directly to your body and cranking it. Here's what you need to do to disable ALL the notifications.</p>

<ol>
<li><p>In the pixel watch app on your phone, under Notifications, deselect all the watch apps and all</p></li></ol>]]></description><link>https://rob.salmond.ca/how-to-make-a-pixel-watch-shut-the-fuck-up/</link><guid isPermaLink="false">73e46e8a-b198-4a28-aa91-20d6a6ed1ea6</guid><dc:creator><![CDATA[Rob Salmond]]></dc:creator><pubDate>Thu, 03 Nov 2022 12:31:11 GMT</pubDate><content:encoded><![CDATA[<p>The default notification experience of a pixel watch is roughly akin to strapping the whole internet directly to your body and cranking it. Here's what you need to do disable ALL the notifications.</p>

<ol>
<li><p>In the pixel watch app on your phone, under Notifications, deselect all the watch apps and all the phone apps. In the General section disable "auto turn on notifications for new apps". Under the "mute notifications" section, enable all the "mute ..." options in the Watch section.</p></li>
<li><p>In the watch itself open the settings (cog icon), Apps &amp; Notifications, Do Not Disturb, enable DND.</p></li>
<li><p>In the fitbit app on your phone, click the profile picture icon in the upper left corner of the app. Then click the "Google Pixel Watch" item in the list (surprise, that's a button!), then disable "Reminders to Move". While you're in there make sure that "Main Goal" is set to "Steps". Back out of there and click the "Activity &amp; Wellness" item under the SETTINGS heading, then click "Daily Activity", and set the "Steps Goal" to 999,999 (the highest it will accept). Now never take more than a million steps and you won't be interrupted again.</p></li>
</ol>

<p>Nice enough watch I guess but it is <em>extremely</em> one point oh.</p>]]></content:encoded></item><item><title><![CDATA[istio, gateways, and ingress gateways]]></title><description><![CDATA[<p>I've been using Istio <sup id="fnref:1"><a href="https://rob.salmond.ca/istio-gateways-and-ingress-gateways/#fn:1" rel="footnote">1</a></sup> a fair bit recently and have started hanging out in <a href="https://slack.istio.io">the community Slack</a>. I don't really feel like I understand something until I can explain it to someone else, so as time allows I've been trying to answer some of the questions I find.</p>

<p>One</p>]]></description><link>https://rob.salmond.ca/istio-gateways-and-ingress-gateways/</link><guid isPermaLink="false">a9b48c68-384d-4caa-ad5a-3eacbc641820</guid><dc:creator><![CDATA[Rob Salmond]]></dc:creator><pubDate>Mon, 26 Oct 2020 03:11:55 GMT</pubDate><content:encoded><![CDATA[<p>I've been using Istio <sup id="fnref:1"><a href="https://rob.salmond.ca/istio-gateways-and-ingress-gateways/#fn:1" rel="footnote">1</a></sup> a fair bit recently and have started hanging out in <a href="https://slack.istio.io">the community Slack</a>. I don't really feel like I understand something until I can explain it to someone else, so as time allows I've been trying to answer some of the questions I find.</p>

<p>One thing I've seen come up several times is folks asking about best practices for <a href="https://istio.io/latest/docs/reference/config/networking/gateway/">Istio Gateways</a>. Specifically, should you have one Gateway or many? What about Ingress Gateways?</p>

<p>In keeping with tradition, a standard issue "it depends" is the correct answer to these questions, but if you just need a quick suggestion to get you started here you go.</p>

<ul>
<li>Gateways: you should probably have as few as you can get away with, but also it probably doesn't matter.</li>
<li>Ingress Gateways: you should probably have one.</li>
</ul>

<h5 id="ingressgateways">Ingress Gateways</h5>

<p><img src="https://rob.salmond.ca/content/images/2020/10/d0e24b03f9b65910584d944e7de63694.jpg" alt="not strictly true"></p>

<p><center> <br>
(not <del>strictly</del> true)
</center></p>

<p>An Istio Ingress Gateway is a Kubernetes Deployment that consists of <em>just</em> an Envoy proxy. Unlike the other proxies in the mesh which are bolted on to your workloads in sidecar fashion, the Ingress Gateway proxies sit on their own at the edge of the mesh accepting outside traffic and directing it inwards, like a load balancer in your cluster.</p>

<p>It may seem at first that having multiple Ingress Gateways can be a strategy for high availability, but a single Ingress Gateway Deployment can be as highly available as any other Deployment by way of multiple replicas across multiple zones and by scaling in response to traffic by way of its Horizontal Pod Autoscaler.</p>
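<p>For example (using the current <code>autoscaling/v2</code> API; adjust the names and numbers to taste):</p>

<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: istio-ingressgateway
  minReplicas: 3    # enough replicas to spread across zones
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
</code></pre>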

<p>If your traffic scale is truly large or your isolation needs are extreme, then, as with traditional load balancers, those could be reasons to have more than one Ingress Gateway, but if you can't think of any specific reasons right now then start with one.</p>

<h5 id="gateways">Gateways</h5>

<p>Unlike an Ingress Gateway an Istio Gateway does not map to a Kubernetes Deployment. It is purely an Istio configuration object which Istio consumes and uses to program the lower level Envoy config on your behalf.</p>

<p>Specifically, a Gateway will inform the Envoy Listener configuration. But unlike Istio Virtual Services which also control Envoy Listeners throughout the mesh, a Gateway object will <em>only</em> apply to those Envoy Listeners on the Ingress Gateway Deployment.</p>

<p>Ugh, this stuff is terminology word salad.</p>

<p>Basically you can think of an Istio Gateway object kind of like an nginx <a href="http://nginx.org/en/docs/http/ngx_http_core_module.html#server"><code>server</code> block</a> with fewer options. It tells the Ingress Gateway Deployment:</p>

<ul>
<li>which port(s) to listen on</li>
<li>which protocol(s) to listen for</li>
<li>which hostname(s) to handle requests for</li>
<li>which TLS certificate(s) to use</li>
</ul>

<p>Where I once configured this blog with an nginx <code>server</code> block like this.</p>

<pre><code>server {  
  listen 443 ssl;
  server_name rob.salmond.ca;

  ssl_certificate /etc/letsencrypt/live/rob.salmond.ca/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/rob.salmond.ca/privkey.key;

  location / {
    proxy_pass http://localhost:2370;
    proxy_redirect off;
  }
}
</code></pre>

<p>I now use an Istio Gateway config object like this.</p>

<pre><code class="language-yaml">apiVersion: networking.istio.io/v1beta1
kind: Gateway  
metadata:  
  name: my-gateway
spec:  
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - 'rob.salmond.ca'
    port:
      name: http
      number: 80
      protocol: HTTP
  - hosts:
    - 'rob.salmond.ca'
    port:
      name: https-default
      number: 443
      protocol: HTTPS
    tls:
      credentialName: ingress-cert
      mode: SIMPLE
</code></pre>

<p>Note that routing traffic to the NodeJS app (<code>localhost:2370</code>) is not handled here. Istio uses Virtual Services to configure the matching of requests to their corresponding apps. Also rather than a full path to a file the TLS certificate is loaded from a Kubernetes Secret called <code>ingress-cert</code>.</p>
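<p>For completeness, the companion Virtual Service might look something like this (the Service name <code>blog-svc</code> is assumed, match yours):</p>

<pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: blog
spec:
  hosts:
  - 'rob.salmond.ca'
  gateways:
  - my-gateway
  http:
  - route:
    - destination:
        host: blog-svc   # the Kubernetes Service in front of the app
        port:
          number: 2370
</code></pre>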

<h5 id="greatbuthowmanygatewaysdoineed">Great but how many Gateways do I need?</h5>

<p>If you're just serving traffic for a single domain, or even a wildcard domain, you only need one Gateway. By separating the routing configuration into Virtual Services Istio just needs your Gateway to indicate that it is hosting <code>*.sweetdomain.com</code> while you use Virtual Services to route traffic for <code>www.sweetdomain.com</code>, <code>api.sweetdomain.com</code>, <code>signup.sweetdomain.com</code>, etc.</p>

<p>If you're serving traffic for a lot of different domains you can either pack them all onto a single Gateway object by appending more <code>hosts</code> entries onto the <code>servers</code> list, or create a separate (and more succinct) Gateway object for each and every domain. Which way you go will be a combination of taste and the limitations of whatever god awful YAML templating tool you find yourself chained to.</p>

<p>Why is it a matter of taste? (or why does it "probably not matter"?)</p>

<p>Because with both of these approaches, under the hood Istio is going to turn these configurations into Envoy Listeners for you, and there will only be one Listener for each unique host:port pair. For your average web app this means that no matter how many Gateways you create, or <code>hosts</code> you add to one Gateway, you will end up with just one <code>0.0.0.0:8080</code> Listener and one <code>0.0.0.0:8443</code> Listener (Istio adds 8000 to the standard port numbers, so 80 becomes 8080 and 443 becomes 8443, to avoid running the proxy as root).</p>

<p>For every <code>host</code> you add Istio will append its configuration <sup id="fnref:2"><a href="https://rob.salmond.ca/istio-gateways-and-ingress-gateways/#fn:2" rel="footnote">2</a></sup> to the corresponding Listener. When requests come in on that Listener, Envoy will either look at the <code>Host</code> or <code>Authority</code> headers or use SNI inspection to figure out which domain the connection is destined for then pass it off to the corresponding Envoy Route (where Istio has sent your Virtual Service configurations to handle request matching) for forwarding to the correct place.</p>

<p>I suppose that regardless of which approach you use to add all your different domains you could wind up with a Listener config so large it blows Envoy out in some interesting way, but if you get to that point then I'm afraid you will need more insight than this blog post can offer.</p>

<div class="footnotes"><ol><li class="footnote" id="fn:1"><p>Version 1.6.3 <a href="https://rob.salmond.ca/istio-gateways-and-ingress-gateways/#fnref:1" title="return to article">↩</a></p></li>
<li class="footnote" id="fn:2"><p>At time of writing it will use a <a href="https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/listener/listener_components.proto#listener-filterchainmatch">filterChainMatch</a> but this is a fairly in-the-weeds implementation detail, the whole point of Istio is to isolate you from low level Envoy stuff. <a href="https://rob.salmond.ca/istio-gateways-and-ingress-gateways/#fnref:2" title="return to article">↩</a></p></li></ol></div>]]></content:encoded></item><item><title><![CDATA[replacing unreadable terraform 11 workarounds with beautiful terraform 12 constructs]]></title><description><![CDATA[<p>On some greenfield infrastructure efforts at work I recently decided to take the not so recently released major update of Terraform 0.12 out for a spin. Specifically, they're up to 0.12.6 so many of the initial snags have been ironed out and it seemed like a good</p>]]></description><link>https://rob.salmond.ca/replacing-unreadable-terraform-11-workarounds-with-beautiful-terraform-12-constructs-2/</link><guid isPermaLink="false">cebbc3bd-7f28-4b41-952d-3dc50e28b0a4</guid><dc:creator><![CDATA[Rob Salmond]]></dc:creator><pubDate>Sat, 17 Aug 2019 23:14:07 GMT</pubDate><content:encoded><![CDATA[<p>On some greenfield infrastructure efforts at work I recently decided to take the not so recently released major update of Terraform 0.12 out for a spin. Specifically, they're up to 0.12.6 so many of the initial snags have been ironed out and it seemed like a good opportunity to kick the tires.</p>

<p>For some years now I have been saying that (pre 0.12) Terraform is better thought of as more like XML than a DSL. Originally the project founder <a href="https://github.com/hashicorp/terraform/issues/1604#issuecomment-96992222">wanted to keep HCL "as simple as possible"</a> and the more familiar programming-language constructs of looping and conditionals were conceded later in the project's life. However, since these were add-ons and the project was not designed with them in mind, they led to some <a href="https://github.com/hashicorp/terraform/issues/14275">scary issues</a>. That one in particular bit me hard in a way that deleted IAM accounts for 2/3rds of our engineering team.</p>

<p>While working with 0.12 though I have managed to add three dramatic improvements to some of our modules by leveraging some awesome new features.</p>

<p>First some background. We have a module which creates GKE clusters and due to the varying needs of our projects sometimes we want one with a default node pool and sometimes we don't. (It doesn't matter if none of that means anything to you, we'll go with simple examples).</p>

<p>Like many Terraform resources, the config for a GKE cluster is complex and requires both arguments and config blocks.</p>

<pre><code>resource "my_resource" "foo" {  
  argument = "value"
  config_block {
    config_a = 1
    config_b = 2
  }
}
</code></pre>

<p>It has always been straightforward with Terraform to pass a variable in and assign it to a value, but until 0.12 it was not possible to optionally specify a config block. It's either on the resource or it's not. In our case we wanted to support the creation of resources which had mutually exclusive combinations of arguments and config blocks. So the workaround looked like this.</p>

<pre><code>resource "my_resource" "variant_a" {  
  count    = "${var.enable_variant_a ? 1 : 0}"
  argument = "value"
}

resource "my_resource" "variant_b" {  
  count    = "${var.enable_variant_a ? 0 : 1}"
  config_block {
    config_a = 1
    config_b = 2
  }
}
</code></pre>

<p>In this way we can achieve the desired result, but it's kind of a gross workaround that results in code duplication and is generally the sort of thing you find <em>all over</em> a complex legacy Terraform code base.</p>

<p>We also needed this module to output the name of the resource regardless of which variant was created so we defined an output.</p>

<pre><code>output "resource_name" {  
  value = "${var.enable_variant_a ? join("", my_resource.variant_a.*.name) : join("", my_resource.variant_b.*.name) }"
}
</code></pre>

<p>Hideous.</p>

<p>This is where we introduce our first dramatic improvement with a new Terraform 0.12 feature, <a href="https://www.terraform.io/docs/configuration/expressions.html#string-templates">string templates</a>.</p>

<p>The addition of conditional directives in strings allowed us to take the above monstrosity and change it to this.</p>

<pre><code>output "resource_name" {  
  value = "%{ if var.enable_variant_a }${my_resource.variant_a[0].name}%{ else }${my_resource.variant_b[0].name}%{ endif }"
}
</code></pre>

<p>Still burly but infinitely more readable.</p>

<p>Our second dramatic improvement is enabled by combining three new features, <a href="https://www.hashicorp.com/blog/terraform-0-12-rich-value-types">complex types</a>, the <a href="https://www.terraform.io/docs/configuration/expressions.html#types-and-values">new <code>null</code> type</a>, and <a href="https://github.com/hashicorp/terraform-guides/tree/master/infrastructure-as-code/terraform-0.12-examples/dynamic-blocks-and-splat-expressions">dynamic blocks</a>. Using these we eliminated the need for duplicate resource variants.</p>

<p>First we constructed a complex type to describe all the configuration required for our resource.</p>

<pre><code>variable "my_resource_config" {  
  type = object({
    name                = string
    location            = string
    enable_variant_a    = bool
    config_block = list(object({
      config_a = number
      config_b = number
    }))
  })
}
</code></pre>

<p>Note the inclusion of the mutually exclusive options. Also note that this variable definition needs to exist both in the module, as a definition of the module's interface, and at the calling site, so a correctly typed instance of the variable can be populated. I opted to separate this complex type from the other variables being passed to the module and put it in its own file so I could symlink it into the calling Terraform space to avoid duplication (we use symlinks as Terraform "include" statements a fair bit).</p>

<p>Next we populate the values of this variable type in our <code>terraform.tfvars</code> file. </p>

<pre><code>my_resource_config = {  
  "name"                = "sweet_resource"
  "location"            = "us-central1"
  "enable_variant_a"    = true
  "config_block"        = []
}
</code></pre>

<p>Finally we add the dynamic block to the resource itself.</p>

<pre><code>resource "my_resource" "resource" {  
  name     = var.my_resource_config.name
  location = var.my_resource_config.location
  argument = var.my_resource_config.enable_variant_a
  dynamic "config_block" {
    for_each = var.my_resource_config.config_block
    content {
      config_a = config_block.value.config_a
      config_b = config_block.value.config_b
    }
  }
}
</code></pre>

<p>Note that to access the elements of the <code>config_block</code> on our complex type we reference the name of the dynamic block (also <code>config_block</code>) followed by <code>.value</code> followed by the element name. If this is confusing, you can add the line <code>iterator = some_other_name</code> to use in place of the name of the dynamic block.</p>

<p>With this approach the values we've set in <code>my_resource_config</code> will cause the boolean value to be passed to our argument, enabling whatever feature we want, while the empty list in the <code>config_block</code> field means the <code>for_each</code> loop in the dynamic block never runs, so the block is not added.</p>

<p>To flip this around we define the config this way.</p>

<pre><code>my_resource_config = {  
  "name"                = "sweet_resource"
  "location"            = "us-central1"
  "enable_variant_a"    = null
  "config_block"        = [
    {
      "config_a" = 1
      "config_b" = 2
    }
  ]
}
</code></pre>

<p>The presence of the null type means that Terraform will not actually set the value of the argument and will not complain about a type mismatch, while the presence of a list of objects in <code>config_block</code> means the loop will now execute and the dynamic block is populated. Note that there are some resources which support multiple copies of a dynamic block (eg. <a href="https://www.terraform.io/docs/providers/aws/r/security_group.html">ingress rules on a security group</a>) and in those cases you can just add more elements to the list.</p>
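<p>For a resource like that, the tfvars entry just grows more list elements, one object per copy of the block, something like this (values illustrative).</p>

<pre><code>"config_block" = [
  {
    "config_a" = 1
    "config_b" = 2
  },
  {
    "config_a" = 3
    "config_b" = 4
  }
]
</code></pre>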

<p>This second improvement also means that the first improvement with the string templates actually gets tossed out: since we no longer have multiple variants of our resource, we can just output <code>my_resource.resource.name</code>. Another win!</p>

<p>The third improvement happened on another module, and combines the elements of the second improvement with the addition of a <code>for_each</code> construct on the resource itself as well as on a dynamic block.</p>

<p>For this resource our config object looks like this.</p>

<pre><code>variable "my_resource_configs" {  
  type = map(object({
    name             = string
    location         = string
    enable_variant_a = bool
    config_block = list(object({
      config_a = number
      config_b = number
    }))
  }))
}
</code></pre>

<p>Note that in contrast to our first config object, the root is a map of objects rather than a single object. This allows us to pass multiple configs in and iterate over them.</p>

<pre><code>my_resource_configs = {  
  "default" = {
    "name"             = "default"
    "location"         = "us-central1"
    "enable_variant_a" = null
    "config_block" = [
      {
        "config_a" = 1
        "config_b" = 2
      }
    ]
  }
  "variant" = {
    "name"              = "variant"
    "location"          = "us-central1"
    "enable_variant_a"  = true
    "config_block"      = []
  }
}
</code></pre>

<p>And in our module we define our resource like so.</p>

<pre><code>resource "my_resource" "resource" {  
  for_each = var.my_resource_configs
  name     = each.value.name
  location = each.value.location
  argument = each.value.enable_variant_a

  dynamic "config_block" {
    for_each = each.value.config_block
    content {
      config_a = config_block.value.config_a
      config_b = config_block.value.config_b
    }
  }
}
</code></pre>

<p>Now we iterate over each of the config items safely, keyed by map key rather than list index (we could use this approach to create IAM accounts and not nuke them when somebody quits and we remove a username from the list of users passed in), and when we come to the dynamic block we further iterate over the elements of that list.</p>

<p>These improvements only touched two Terraform modules in our codebase but resulted in the elimination of over 150 lines of legacy boilerplate and workarounds. Probably my favourite pull request to date.</p>]]></content:encoded></item><item><title><![CDATA[automatically calibrate ppm for rtl-sdr]]></title><description><![CDATA[<p>Note: I found this post sitting in my draft posts folder, it is over two years old but I figured I'd publish it anyway.</p>

<p>I recently discovered an rtl-sdr project of mine had stopped receiving data. After a bit of investigation it turned out the ppm error value I had</p>]]></description><link>https://rob.salmond.ca/automatically-calibrate-ppm-for-rtl-sdr/</link><guid isPermaLink="false">c45b36c5-8344-4ab1-950f-3e5542232cd1</guid><dc:creator><![CDATA[Rob Salmond]]></dc:creator><pubDate>Fri, 16 Aug 2019 22:33:28 GMT</pubDate><content:encoded><![CDATA[<p>Note: I found this post sitting in my draft posts folder, it is over two years old but I figured I'd publish it anyway.</p>

<p>I recently discovered an rtl-sdr project of mine had stopped receiving data. After a bit of investigation it turned out the ppm error value I had previously calculated seems to have drifted over time. </p>

<p>PPM is a unit used to measure the variance in a crystal oscillator between the frequency it is meant to operate at and the frequency it actually operates at. When using an rtl-sdr you need to calculate this variance and account for it in your tuning software, eg. <code>rtl_fm -p &lt;ppm goes here&gt;</code>.</p>
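<p>As a quick sketch of the arithmetic (the function and sample numbers here are mine, not from any rtl-sdr tool), the error is just the relative frequency deviation scaled to parts per million.</p>

<pre><code>def ppm_error(nominal_hz, actual_hz):
    # parts-per-million deviation between the frequency an oscillator
    # should run at and the frequency it actually runs at
    return (actual_hz - nominal_hz) / nominal_hz * 1e6
</code></pre>

<p>For example, a 28.8 MHz rtl-sdr crystal running 1670 Hz fast works out to roughly 58 ppm, which is the sort of number you end up passing to <code>rtl_fm -p</code>.</p>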

<p>Apparently <a href="http://electronicdesign.com/analog/minimize-frequency-drift-crystals">temperature is a factor</a> and it has been chilly lately so I'm guessing that's the culprit.</p>

<p>The first time around I had followed <a href="http://www.satsignal.eu/raspberry-pi/acars-decoder.html">these instructions</a> and computed the PPM for my tuner manually. Since my project doesn't need to operate at night (because <a href="https://rob.salmond.ca/seabus-tracking/">the boats</a> aren't running) I started thinking that with some scripty goodness I could just recalibrate the tuner nightly to avoid future downtime.</p>

<p>So I hacked something up to do just that, again using the kalibrate-rtl program, which computes PPM by checking the known base frequencies of cell phone channels. My script checked the three strongest nearby cellular channels three times each and then computed the average.</p>

<p>The problem is it doesn't work. Each cellular channel it checked, each corresponding to a separate cell tower, produced a wildly different PPM value from the others. It turns out I'm <a href="https://www.reddit.com/r/RTLSDR/comments/3sj650/range_of_results_from_kalibrate/">not the first</a> person to notice this. Basically, the reason you can't use these supposedly "known good" base signals to calculate error is that they also contain a small amount of error, and the errors differ from tower to tower.</p>

<p>I found references in various HAM radio forums about using NOAA weather signals to manually compute PPM as they're known to be reliably well tuned, however it wasn't something I could easily script so I abandoned that approach.</p>

<p>As it turns out, with some further reading I discovered that the RTL-SDR library itself ships with a program to compute the PPM error using a completely different approach. <code>rtl_test</code> computes the PPM error by comparing the oscillator in the radio to the oscillator in your computer's clock, simply by checking the time repeatedly. Much simpler, faster, and more reliable!</p>

<p>For this valuable contribution the author received only a paltry six upvotes on <a href="https://www.reddit.com/r/RTLSDR/comments/10rh9d/prototype_ppm_error_measurement/">his post</a> in the RTL-SDR subreddit. Turns out the same guy also wrote the <code>rtl_fm</code> program I'm using to tune my project as well as a program called <code>rtl_power</code> which, among other things, can be used to do <a href="http://kmkeen.com/rtl-power/">passive radar tracking</a>. </p>

<p>Bad <em>ass!</em></p>

<p>So for helping me fix my project and for all your awesome contributions to the RTL-SDR community, thank you Kyle Keen!</p>]]></content:encoded></item><item><title><![CDATA[chaining blocks for disappointment and failure]]></title><description><![CDATA[<p>I recently took a new job at a pretty large enterprise in one of the security groups. I happened to join just as they kicked off an annual CTF contest so I signed up on day one and started gleefully hacking about. I was doing pretty well holding a spot</p>]]></description><link>https://rob.salmond.ca/chaining-blocks-for-disappointment-and-failure/</link><guid isPermaLink="false">4ae4cd8c-310a-4028-a469-251c2a285d77</guid><dc:creator><![CDATA[Rob Salmond]]></dc:creator><pubDate>Thu, 09 Nov 2017 04:33:24 GMT</pubDate><content:encoded><![CDATA[<p>I recently took a new job at a pretty large enterprise in one of the security groups. I happened to join just as they kicked off an annual CTF contest so I signed up on day one and started gleefully hacking about. I was doing pretty well holding a spot in the top ten amid some fifty competitors, had fun factoring some weak RSA keys, implementing a login timing attack, and recovering files encrypted by a poorly written ransomware. </p>

<p>Then I came to a challenge that required implementing and mining a blockchain. Given a particular genesis block, difficulty requirements, and a hash algorithm you had to provide a chain of at least four valid blocks to retrieve the flag. </p>

<p>Specifically, the genesis block looked like this.</p>

<pre><code>{
"identifier": "000102030405060708090A0B0C0D0E0F", 
"nonce": 3754873684,
"data": "Genesis Block for CTF contest, all block chains must start with this block. This is equivalent to Big Bang, time didn't exist before this",
"previous_hash": null
}
</code></pre>

<p>The hash algorithm looked like this.</p>

<pre><code>def hash_block(block):  
    message = hashlib.sha256()
    message.update(str(block['identifier']).encode('utf-8'))
    message.update(str(block['nonce']).encode('utf-8'))
    message.update(str(block['data']).encode('utf-8'))
    message.update(str(block['previous_hash']).encode('utf-8'))
    return message.hexdigest()
</code></pre>

<p>And the difficulty for each block was pre-assigned in the following order: 8 for the genesis block, then 4, 4, 5, 6, 7, 9, 11, 13, and 16 for the remaining blocks. In this case difficulty is defined as the number of leading zeros in the resulting hash.</p>

<p>The content of the data field was dealer's choice, so I populated it with a poem I'm fond of and implemented the miner over my lunch break. The <a href="https://github.com/rsalmond/ctfblockchain/blob/master/pythonminer/miner.py">result</a> produced about 100k hashes per second on my macbook, randomly hashing the block with a different <code>nonce</code> value until hitting upon one which produced a hash that satisfied the difficulty requirement. It completed the required four blocks before I had finished eating. I submitted the chain, collected my flag, and went back to work.</p>
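<p>The core of a miner like that fits in a few lines. Here's a minimal Python sketch (the <code>mine</code> function and its naming are mine) that reuses the contest's <code>hash_block</code> from above and randomizes the nonce until the digest has enough leading zeros.</p>

<pre><code>import hashlib
import random

def hash_block(block):
    # the contest's hash algorithm, as shown earlier
    message = hashlib.sha256()
    message.update(str(block['identifier']).encode('utf-8'))
    message.update(str(block['nonce']).encode('utf-8'))
    message.update(str(block['data']).encode('utf-8'))
    message.update(str(block['previous_hash']).encode('utf-8'))
    return message.hexdigest()

def mine(block, difficulty):
    # guess random nonces until the hex digest starts with
    # `difficulty` zeros, then return the winning digest
    while True:
        block['nonce'] = random.randint(0, 2**32 - 1)
        digest = hash_block(block)
        if digest.startswith('0' * difficulty):
            return digest
</code></pre>

<p>Each extra zero of difficulty multiplies the expected number of guesses by sixteen, which is why the jump from difficulty 6 to 11 hurts so much.</p>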

<p>But one thing kept bothering me. I kept thinking about the fact that even though it took only four blocks to collect the flag the instructions provided difficulty values for a ten block chain. What would happen if I submitted all ten blocks? Bonus points? A hidden challenge? My weight in dogecoins?</p>

<p>I stopped working on all the remaining challenges and focused on completing the chain. Since the new gig is a Go heavy shop and my Go game is weak I decided to port the miner to Go for a compiled language speed up. First step was to see if I could replicate the hashing algorithm which turned out to be pretty straightforward with the standard library <code>crypto/sha256</code> package.  </p>

<pre><code>func (b *Block) hash() []byte {  
    h := sha256.New()
    h.Write([]byte(b.Identifier))
    h.Write([]byte(strconv.Itoa(b.Nonce)))
    h.Write([]byte(b.Data))
    h.Write([]byte(b.Previous_hash))
    return h.Sum(nil)
}
</code></pre>

<p>I can see why Python people like Go, it's pretty intuitive. This bought me about an 8x speed up. My macbook was churning out 800k hashes per second which ripped through the next few blocks rapidly up until it hit block 7 with a difficulty rating of 11, and there it stayed for quite a while. </p>

<p>Recalling back to the <a href="https://rob.salmond.ca/the-cut-throat-side-of-the-coin/">early days of bitcoin mining</a> I decided that a mining pool seemed like a good approach. I have nerdy friends, they have computers, by our powers combined we can do a thing right?</p>

<p>So I implemented a quick and dirty <a href="https://github.com/rsalmond/ctfblockchain/blob/master/blockserver/server.py">web app</a> to loosely coordinate a distributed pool of miners. I made sure the miners would generate the same <code>identifier</code> field for a given block where previously it had also been random, so the fleet would be working on the same problem. Then I had them poll the server every thirty seconds to see if anyone else had mined a block. Stumbling about with my new Go legs it took about a day and a half to get the server and the miner working in tandem.</p>

<p><img src="https://rob.salmond.ca/content/images/2017/11/gominer.png" alt></p>

<p>On Friday the 3rd I started soliciting friends to run the miner.</p>

<p><img src="https://rob.salmond.ca/content/images/2017/11/screen.png" alt></p>

<p>I was testing the pool with my work macbook, my personal laptop, and my desktop. All told these were combining to produce about 2.4 million hashes per second. There seemed to be an upper limit around the 800k mark per machine that didn't vary much with CPU speed. As friends came on board the pool hashrate began to climb. Slowly.</p>

<p>The ones with beefy gaming PCs complained that their many cores remained idle while mining so I set about learning how to use Go's concurrency to make use of them. Eventually I got it working after faffing about with channels and worker pools and whatnot. The code is <a href="https://github.com/rsalmond/ctfblockchain/blob/master/miner.go">here</a> and it's hideous and no doubt rife with bugs. I haven't had to deal with pointers since the 90's and my approach is roughly on par with one that <a href="http://kuebri.ch">a friend</a> once half jokingly suggested: "Keep adding asterisks and ampersands until it works".</p>

<p>Anyway it's ugly but functional, the concurrent miner spawned a dedicated goroutine for each available core on the system and did a great job pinning the entire CPU to 100% usage.</p>

<p>We went from this.</p>

<p><img src="https://rob.salmond.ca/content/images/2017/11/idlecores.png" alt></p>

<p>To this.</p>

<p><img src="https://rob.salmond.ca/content/images/2017/11/Pasted-image-at-2017_11_04-04_08-PM.png" alt></p>

<p>Somewhere around this point on the evening of the Nov 4th I decided to start graphing the hashrate of the mining pool.</p>

<p><img src="https://rob.salmond.ca/content/images/2017/11/screen-1.png" alt></p>

<p>We were doing better but for the next two days the entire pool churned and churned on block 7 getting nowhere. I decided to stand up a bunch of cheap google cloud compute instances to help the effort. My friend Adam had a bunch of cores sitting around from a screw up with a cloud provider ages ago so he fired up a dozen miners in there. At its peak the pool hit nearly 50 million hashes per second. By then I'd had to fortify the web app with more workers and a proper MySQL rather than just SQLite.</p>

<p>Some time around 8 am on Sunday the 5th a google instance in South America with miner id <code>662F4F146E1504EA</code> mined the elusive block 7 and the whole fleet rolled over to working on block 8 with difficulty level 13. At this point we had three days to go till the end of the CTF contest. People had been suggesting it for a while and I'd been hoping to avoid it but I finally decided to cave and take a serious look at implementing a GPU accelerated miner.</p>

<p>I spent all day Sunday and most of Monday evening tinkering with a <a href="https://gist.github.com/allanmac/8745837">variety</a> of <a href="https://github.com/hashcat/hashcat">GPU based</a> sha256 <a href="https://github.com/magnumripper/JohnTheRipper">implementations</a>, even getting a little twitter help from <a href="https://gist.github.com/allanmac/8745837">one author</a>.</p>

<p>I managed to get a hash actually computed on my GPU but writing an algorithm to run at GPU scale parallelism is pretty far from the sort of software I normally write. I was just <a href="https://devblogs.nvidia.com/parallelforall/even-easier-introduction-cuda/">beginning to see</a> how I could divide the work up and was just skirting the outlines of how I might get meaningful answers back out, but with time running short and brain cells in short supply I sputtered to a stop a day before the contest ended.</p>

<p>Feeling pretty burnt out I threw up my hands in defeat. What I did learn was a fair bit about Go, who my most competitive friends are, and way more than I ever needed to know about how sha256 works. For the curious, here's my tl;dr.</p>

<p>Remember these puzzles?</p>

<p><img src="https://rob.salmond.ca/content/images/2017/11/puzzle.jpg" alt></p>

<p>That's pretty much how sha256 works. When you fire it up the innards are arranged in a standard configuration hand crafted from the finest artisanal entropy. Then the input data is padded to ensure its size is a multiple of 512 bits and it is shoved through the system 512 bits at a time. Each block of bits turns the crank and slides the configuration around in a consistent but input dependent way ensuring that if even one input bit is altered the sliders wind up in wildly varied final states.</p>

<p>And that's it. See? Cryptographically secure hashes aren't that hard. Now whatever you do don't go read <a href="http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.180-4.pdf">FIPS-180</a>. That way lies total madness.</p>

<p>I hope the CTF is this much fun next year.</p>]]></content:encoded></item><item><title><![CDATA[simple tag ignore hook for urlwatch]]></title><description><![CDATA[<p>I recently set up <a href="https://github.com/thp/urlwatch">urlwatch</a> to alert me if some web pages I'm interested in are changed. It has a nice pushbullet integration and is pretty easy to set up. Too easy in fact. Pro tip, after configuring your preferred notification service and setting <code>enabled: true</code> you're done. I spent</p>]]></description><link>https://rob.salmond.ca/simple-tag-ignore-hook-for-urlwatch/</link><guid isPermaLink="false">7888d5ee-b38f-4df3-90a2-2a9ac08234a0</guid><dc:creator><![CDATA[Rob Salmond]]></dc:creator><pubDate>Mon, 02 Oct 2017 14:57:53 GMT</pubDate><content:encoded><![CDATA[<p>I recently set up <a href="https://github.com/thp/urlwatch">urlwatch</a> to alert me if some web pages I'm interested in are changed. It has a nice pushbullet integration and is pretty easy to set up. Too easy in fact. Pro tip, after configuring your preferred notification service and setting <code>enabled: true</code> you're done. I spent a while faffing about thinking there had to be more to it. There isn't.</p>

<p>What I found however is that one of the pages I was monitoring had a dynamically generated <code>&lt;script&gt;</code> tag in it which was triggering spurious notifications I wanted to suppress. There didn't seem to be an obvious way to ignore particular tags so I created a simple hook to do this.</p>

<pre><code>from urlwatch import filters
from bs4 import BeautifulSoup

class IgnoreFilter(filters.FilterBase):
    __kind__ = 'ignore'

    def filter(self, data, subfilter=None):
        if subfilter is None:
            return data

        soup = BeautifulSoup(data, 'html.parser')
        for element in soup.select(subfilter):
            element.extract()
        # return a string so urlwatch and any later filters get the
        # type they expect rather than a BeautifulSoup object
        return str(soup)
</code></pre>

<p>This adds a new filter type called <code>ignore</code> which accepts a CSS selector as a parameter. It then uses the magical <a href="https://www.crummy.com/software/BeautifulSoup/">BeautifulSoup</a> HTML parser to find all the elements which match the selector and remove them before returning the remaining HTML. </p>

<p>Urlwatch then does its normal comparison against the previous run to see if anything has changed and carries on as usual. </p>

<p>To use the filter, update your config like so, altering the CSS selector to suit your needs.</p>

<pre><code>$ urlwatch --edit

---
name: "some site"  
url: "https://something.com/"  
filter: "ignore:body &gt; script:nth-of-type(2)"  
---
</code></pre>

<p>This ignores the second <code>&lt;script&gt;</code> tag beneath the <code>&lt;body&gt;</code>.</p>]]></content:encoded></item><item><title><![CDATA[developing in docker]]></title><description><![CDATA[<p>I've been job hunting recently and in keeping with tradition that means I've been working on some coding homework assignments. For one company which I was particularly hoping to impress I got a bit showy and put together a nice containerized environment to work in. I learned most of the</p>]]></description><link>https://rob.salmond.ca/developing-in-docker/</link><guid isPermaLink="false">9b4c78fc-c3ff-49d3-9c3f-654b3d7bf1c4</guid><dc:creator><![CDATA[Rob Salmond]]></dc:creator><pubDate>Mon, 18 Sep 2017 00:38:13 GMT</pubDate><content:encoded><![CDATA[<p>I've been job hunting recently and in keeping with tradition that means I've been working on some coding homework assignments. For one company which I was particularly hoping to impress I got a bit showy and put together a nice containerized environment to work in. I learned most of the techniques for this approach working with a large dev team who built extensive tooling around their containerized dev environment to support over a dozen custom apps and at least as many supporting service containers.</p>

<p>In the weeks since submitting this project I've had a couple friends mention that they would like to learn more about working in docker so I've extracted the good bits and put them in a <a href="https://github.com/rsalmond/hello-compose">public repo</a>. Here I'll describe how it works and how to use it.</p>

<h6 id="theflaskappandconfiguration">The Flask App and Configuration</h6>

<p>I reached for Flask to build the web app as that's my go to framework and I wanted to build a strong submission; go with what you know, as they say. I have replaced the business logic from the actual assignment so as not to provide reference material for future applicants but the structure is the same. </p>

<p>If you're looking to learn Flask there are better projects out there. I recommend <a href="https://github.com/mattupstate/overholt">Overholt</a> an oldie but a goody. <a href="https://github.com/sloria/cookiecutter-flask">Cookiecutter Flask</a> which is more geared towards full web apps than APIs. Or my favourite <a href="https://github.com/dimmg/flusk">Flusk</a>, a clean, fairly modern, and well organized Flask boilerplate.</p>

<p>The only thing worth mentioning with respect to docker is the way <a href="https://github.com/rsalmond/hello-compose/blob/master/hello/hello/config.py">configuration</a> is handled. There is a bit of extra logic in there to deal with the MySQL replicas (more on that below) but basically it just grabs the value of any environment variables prefixed with <code>HELLO_</code> and hangs them off the Flask config object. I took this approach because environment variable injection is the baseline approach for passing config into a running container both in docker-compose and practically every container orchestration system. This gives us an easy on ramp to move from dev to prod.</p>
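<p>In other words, the config loader boils down to something like this (a sketch; the function name is mine and the real module in the repo differs in its details).</p>

<pre><code>import os

def load_env_config(prefix='HELLO_'):
    # collect prefixed environment variables into a config dict,
    # stripping the prefix so HELLO_DB_URI becomes DB_URI
    return {
        key[len(prefix):]: value
        for key, value in os.environ.items()
        if key.startswith(prefix)
    }
</code></pre>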

<p>If you find yourself baking config files into your containers, or having some script in your container fetch a config file from somewhere you're gonna have a bad time. In that case just go straight to making your app <a href="https://www.consul.io/">Consul</a> or <a href="https://coreos.com/etcd/docs/latest/">etcd</a> aware and be done with it.</p>

<p>To make use of the read-only replica I used this <a href="https://github.com/peterdemin/python-flask-replicated/blob/master/flask_replicated.py">flask-replicated</a> extension which is a bit naive in that it uses the HTTP method rather than the database operation to decide which database to execute the query on. For example if you had some <code>user.last_accessed_on</code> datetime field that got updated on every page view this wouldn't cut it but it gets the job done for this simple app.</p>

<h6 id="thedockerfile">The Dockerfile</h6>

<p>One thing of note regarding the <a href="https://github.com/rsalmond/hello-compose/blob/master/Dockerfile">dockerfile</a> is the separation of the <code>requirements.txt</code> file (the python version of a <code>Gemfile</code> or a <code>package.json</code> file) from the rest of the app in terms of <a href="https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/">layers</a>. Since each <code>ADD</code> statement creates a new layer in the image and minimizing the number of layers is best practice, this may seem counterintuitive. </p>

<p>The idea here is to speed up build times. The build process will only rebuild those layers which have been modified since the last build, however it must then rebuild any layers built upon the modified layer. By placing the <code>requirements.txt</code> layer above the layer for the rest of the app code we ensure that rebuilding that layer (and the subsequent <code>apt-get install ... pip install ...</code> layer) only happens when the requirements change. Without this separation every single line of code we changed would mean a tedious rebuild of those layers.</p>
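<p>The ordering looks roughly like this (paths are illustrative, not copied from the repo).</p>

<pre><code># changes rarely: this layer and the install layer below stay cached
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# changes constantly: only this layer is rebuilt on a code edit
ADD . /app
</code></pre>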

<p>The good news is that you don't need to rebuild the container every time you change the code, though. Next we'll look at how to hack in this environment.</p>

<h6 id="thedockercomposeandoverridefiles">The docker-compose and override files</h6>

<p>This is where much of the development magic happens, using docker-compose we can stand up all the dependencies our app(s) rely on. In this case two MySQL containers in a master/replica configuration (courtesy of <a href="https://github.com/twang2218/mysql-replica">Tao Wang</a>) and a memcached container.</p>

<p>There are a few things worth highlighting <a href="https://github.com/rsalmond/hello-compose/blob/master/docker-compose.yml">here</a>. First is the use of <code>healthcheck</code> and <code>restart</code> directives. These will both let you know if your services become unreachable for some reason and try to restart them in case they stop. Useful in dev when trying weird stuff is a common occurrence. </p>

<p>Next and more important is the use of explicit app level configuration for connecting to the services in the supporting containers, in this specific case by providing environment variables for memcached host and MySQL URI strings but this could be any app level config.</p>

<p>When linking containers via the docker-compose <code>depends_on</code> mechanism the Hello app could simply default to looking for the hostname <code>master</code> or <code>memcached</code> which would resolve to the correct container. However the pattern of using code level dev-default values, be they service dependencies or feature flags or basically anything that might be different in production, creates a minefield of unknown unknowns when it comes time to ship your containers. </p>

<p>By explicitly specifying these configurations during development we have a roadmap to follow when we deploy to Kubernetes or ECS or whatever else down the road. Believe me when I say that reverse engineering this sort of config without a guide <em>sucks</em>.</p>

<p>Finally we should look at the <a href="https://github.com/rsalmond/hello-compose/blob/master/docker-compose.override.dev.yml">docker-compose override file</a>. By default docker-compose will parse the main <code>docker-compose.yml</code> file and then update the config it finds there with any additions or changes it finds in <code>docker-compose.override.yml</code>. We can leverage this mechanism to provide a nice developer experience by setting up the primary docker-compose file with the assumption that every container in the stack will behave normally (that is, start running the app it hosts) when it comes up. Then we can use the override file to knock out any container we care to hack on: we replace the <code>command</code> directive so that rather than running the app it just keeps the container running indefinitely, and add a <code>volume</code> directive so that rather than using the source baked into the container it reads our local copy on the host OS, letting us hack using our preferred editor.</p>
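<p>A minimal override along those lines looks something like this (the service name <code>hello</code> and the paths are assumptions for illustration, not copied from the repo).</p>

<pre><code>version: "3"
services:
  hello:
    # keep the container alive instead of starting the app
    command: sleep infinity
    # mount the host copy of the source over the copy baked into the image
    volumes:
      - ./hello:/app
</code></pre>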

<p>This turns the container into our dev system, hooked to all the dependency containers, isolated from our host OS, and fully loaded with all the libraries our app depends on.</p>

<p>We can then hop into the running container to interact with our code as we update it by using the <code>docker exec -it &lt;container_id&gt; /bin/bash</code> command. Or in this case we can use the make target for just this purpose and instead run: <code>make dev</code>.</p>

<h6 id="themakefile">The Makefile</h6>

<p>This provides a lot of convenience tools for interacting with the dev environment. This could be done with any similar tool like <code>rake</code> or <code>grunt</code> or <code>yarn</code> or whatever the cool kids are using now.</p>

<p>Some useful patterns are things like the DB migration target <code>make setup-db</code>. This starts up the db containers then manually runs the app container with the necessary parameters to link with the databases and execute the initial migrations. This could be done with yet another docker-compose override file but those grow numerous quite quickly. Note that this pattern is the reason the docker network is created externally (by <code>make setup</code>) rather than implicitly by docker-compose, so that we can link stand alone containers to those running in docker-compose.</p>

<p>Other handy dev targets are <code>make testdata</code> for generating canned API calls to our app and <code>make nuke</code> to completely blow the environment away when we inevitably screw it all up.</p>

<h6 id="finalthoughts">Final Thoughts</h6>

<p>As usual this is mostly an exercise in capturing my thoughts for future reference but hopefully someone besides future me will find this helpful. I intend to use this repo as boilerplate for new projects so it should see at least a bit of upkeep here and there as I hack on stuff. I have also been tinkering with redeploying some of my personal projects in containers so I will likely have a follow up post sooner or later about the trip from dev to prod.</p>

<p>Also, I got the job so I guess I must have done something right!</p>]]></content:encoded></item><item><title><![CDATA[a quick and dirty orbnext hack]]></title><description><![CDATA[<p>This Christmas my brother and his wife sent me an <a href="http://www.orbnext.com/">ORBNext</a>, a very nerdy gift somewhat akin to a Philips Hue but more hackable by virtue of being built on the <a href="https://electricimp.com/">Electric Imp</a> platform. The most interesting part of which is the "blink up" technology which involves an app on</p>]]></description><link>https://rob.salmond.ca/a-quick-and-dirty-orbnext-hack/</link><guid isPermaLink="false">9b9d2b26-a6e5-49d2-8253-1cecb84a7944</guid><dc:creator><![CDATA[Rob Salmond]]></dc:creator><pubDate>Fri, 20 Jan 2017 05:36:21 GMT</pubDate><content:encoded><![CDATA[<p>This Christmas my brother and his wife sent me an <a href="http://www.orbnext.com/">ORBNext</a>, a very nerdy gift somewhat akin to a Philips Hue but more hackable by virtue of being built on the <a href="https://electricimp.com/">Electric Imp</a> platform. The most interesting part of which is the "blink up" technology which involves an app on your phone pulsating the screen brightness while the imp sits atop it reading the pulses with a photosensor. The result is a device with no screen and no buttons which can be connected to your wifi with less hassle than a chromecast which is pretty cool.</p>

<p>The mobile app has some basic functionality built in: set the colour based on the temperature or the price of your favourite stock symbol, that sort of thing. It also has <a href="https://ifttt.com/">IFTTT</a> integration, a handy service I've used for many years. Unfortunately it's not <em>quite</em> as useful as I'd like. For example, if you trigger a "blink the light" event by some action, the light just keeps blinking until you intervene, which I'm not thrilled about.</p>

<p>But then if you got a problem with a hackable light you go ahead and you hack it.</p>

<p>I noticed that the response time between my pressing a button on the mobile app and the light updating itself was extremely fast, so fast that I assumed the app had to be communicating with the light directly over wifi. I fired up <a href="https://ettercap.github.io/ettercap/">ettercap</a> and ran a MITM attack between my phone and the light (MITM on a light, this is the world we live in?) but found no traffic going between them.</p>

<p>Next I ran it between the light and my router looking for its control channel. I found that it established a TCP connection to <code>imp02b.boxen.electricimp.com:31314</code> but couldn't make heads or tails of the binary protocol in use, so I moved on to the app. Unfortunately the mobile app was communicating via HTTPS, so I switched over to <a href="https://mitmproxy.org/">mitmproxy</a> so I could see inside the encrypted traffic.</p>

<p>Here's what I found.</p>

<p><img src="https://rob.salmond.ca/content/images/2017/01/app-capture.png" alt></p>

<p>The app was doing a nice simple POST of some straightforward form data to control the light. During setup each unit produces a unique device code which is used to identify it; this is the URI being addressed.</p>

<p>It seemed like it'd be easy to replicate, so I set about <code>curl</code>ing to see if I could do it, but it ended up getting late and I found myself frustrated by countless 404 responses. I replicated every header and the user agent, and even went so far as to script up a request that reproduced the lower case 'h' in the <code>host:</code> header, all with no success.</p>

<p>My request looked absolutely identical and still wouldn't work. I gave up and went to bed.</p>

<p><img src="https://rob.salmond.ca/content/images/2017/01/replication-fail.png" alt></p>

<p>As is often the case when I looked at the problem with fresh eyes today the answer jumped out at me. The <code>Content-Length</code> in my spoofed request was way off, lots more data than the app was sending. I compared the payloads in hex and found the problem.</p>

<p><img src="https://rob.salmond.ca/content/images/2017/01/hex-comparison.png" alt></p>

<p>The trick, it turns out, is to send the payload with an <code>application/x-www-form-urlencoded</code> header but not actually URL encode it. I couldn't figure out how to make cURL or my HTTP <a href="http://docs.python-requests.org/en/master/">client library of choice</a> do something so stupid, but of course urllib2 has no such qualms.</p>

<pre><code class="language-python">import urllib2
import json

YOUR_DEVICE_CODE = '&lt;thing goes here&gt;'

URL = 'https://agent.electricimp.com/{}'.format(YOUR_DEVICE_CODE)

def set_color(color):
    # urllib2 defaults the Content-Type to application/x-www-form-urlencoded
    # when data is supplied, but sends the body verbatim -- exactly the
    # unencoded JSON the device expects.
    data = {"program": "Demo", "color": "#{}".format(color)}
    req = urllib2.Request(URL, json.dumps(data))
    response = urllib2.urlopen(req)
    return response.read()

if __name__ == '__main__':
    set_color('FFFFFF')
</code></pre>
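<p>For future reference, urllib2 only exists on Python 2. Here's a rough, untested Python 3 sketch of the same request using <code>urllib.request</code>, which behaves the same way (the device code placeholder stays elided):</p>

<pre><code class="language-python">import json
import urllib.request

YOUR_DEVICE_CODE = '&lt;thing goes here&gt;'

URL = 'https://agent.electricimp.com/{}'.format(YOUR_DEVICE_CODE)

def build_payload(color):
    # The body is JSON and deliberately NOT url-encoded, even though it
    # travels under the form-encoded content type.
    return json.dumps({"program": "Demo", "color": "#{}".format(color)}).encode()

def set_color(color):
    # Like urllib2 before it, urllib.request defaults the Content-Type to
    # application/x-www-form-urlencoded and sends the bytes verbatim.
    req = urllib.request.Request(URL, data=build_payload(color))
    return urllib.request.urlopen(req).read()

# set_color('FFFFFF')  # lights it up white
</code></pre>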

<p>And since it works from anywhere on the internet now I can activate disco mode when I'm not even home!</p>

<p><blockquote class="imgur-embed-pub" lang="en" data-id="NMN9V5h"><a href="//imgur.com/NMN9V5h"></a></blockquote><script async src="//s.imgur.com/min/embed.js" charset="utf-8"></script></p>]]></content:encoded></item><item><title><![CDATA[my youtube recommendations]]></title><description><![CDATA[<p>I watch a lot of youtube. Like a <em>lot</em>. Far more than TV, netflix, or movies. I'm always looking for <a href="https://www.reddit.com/domain/youtube.com/">ways to find</a> good stuff to watch and new channels to subscribe to since youtube recommendations haven't been worth a damn in quite some time.</p>

<p>Since I'm often mentioning interesting</p>]]></description><link>https://rob.salmond.ca/my-youtube-recommendations/</link><guid isPermaLink="false">89c26286-9a38-484f-b17b-2c3013e150ee</guid><dc:creator><![CDATA[Rob Salmond]]></dc:creator><pubDate>Tue, 17 Jan 2017 09:01:00 GMT</pubDate><content:encoded><![CDATA[<p>I watch a lot of youtube. Like a <em>lot</em>. Far more than TV, netflix, or movies. I'm always looking for <a href="https://www.reddit.com/domain/youtube.com/">ways to find</a> good stuff to watch and new channels to subscribe to since youtube recommendations haven't been worth a damn in quite some time.</p>

<p>Since I'm often mentioning interesting things I've watched, I've had a few friends recently ask me for channel suggestions, so here we go. My first listicle! In no particular order, here are a bunch of youtube channels that I find interesting.</p>

<p><a href="https://www.youtube.com/user/EconomistMagazine">The Economist</a></p>

<p>Better than you'd think, even if you are interested in economics already.</p>

<p><a href="https://www.youtube.com/user/AtGoogleTalks">Talks at Google</a></p>

<p>Google hosts a dizzying array of experts to give talks on a wide variety of subjects.</p>

<p><a href="https://www.youtube.com/user/crashcourse">Crash Course</a></p>

<p>Learn <em>everything</em>.</p>

<p><a href="https://www.youtube.com/user/thephilosophytube">Philosophy Tube</a></p>

<p>Just what it says on the tin.</p>

<p><a href="https://www.youtube.com/user/schooloflifechannel">School of Life</a></p>

<p>Just what it says on the tin.</p>

<p><a href="https://www.youtube.com/user/vice">Vice</a> and <a href="https://www.youtube.com/user/vicenews">Vice News</a></p>

<p>Journalism with f bombs, shit you won't see on MSNBC.</p>

<p><a href="https://www.youtube.com/channel/UC-a4MiIis33-Z9vLYFerN6g">Singularity Lectures</a></p>

<p>A mix of scientists and dreamers speculating about how current state of the art might play out in the future.</p>

<p><a href="https://www.youtube.com/user/thehealthcaretriage">Healthcare Triage</a></p>

<p>A practicing pediatrician, researcher, columnist, and author sets you straight about health care.</p>

<p><a href="https://www.youtube.com/user/ScienceMag">Science Magazine</a></p>

<p>For real. The actual <a href="http://www.sciencemag.org">Science</a> magazine.</p>

<p><a href="https://www.youtube.com/user/TheRoyalInstitution">The Royal Institution</a></p>

<p>For real. The actual <a href="http://www.rigb.org/">Royal Institution</a>.</p>

<p><a href="https://www.youtube.com/user/voxdotcom">Vox</a></p>

<p>Context on current events.</p>

<p><a href="https://www.youtube.com/user/CaspianReport">Caspian Report</a></p>

<p>This guy explains history and geopolitics. If the news tells you what happens, he can tell you why.</p>

<p><a href="https://www.youtube.com/channel/UCL5kBJmBUVFLYBDiSiK1VDw">Channel Criswell</a></p>

<p>Digging deep into cinema.</p>

<p><a href="https://www.youtube.com/channel/UCWTFGPpNQ0Ms6afXhaWDiRw">Now You See It</a></p>

<p>Digging deep into cinema.</p>

<p><a href="https://www.youtube.com/user/everyframeapainting">Every Frame a Painting</a></p>

<p>Digging deep into cinematography.</p>

<p><a href="https://www.youtube.com/channel/UCggHoXaj8BQHIiPmOxezeWA">History Buffs</a></p>

<p>Comparing historical cinema to historical facts.</p>

<p><a href="https://www.youtube.com/user/coldfustion">Cold Fusion</a></p>

<p>Sorta tech history focused, back stories on tech companies, trends, press releases, etc.</p>

<p><a href="https://www.youtube.com/channel/UCutCcajxhR33k9UR-DdLsAQ">Complexity Academy</a></p>

<p>Complexity itself is a thing.</p>

<p><a href="https://www.youtube.com/channel/UC_cznB5YZZmvAmeq7Y3EriQ">Stated Clearly</a></p>

<p>Evolution and the latest ideas on the origin of life laid out in simple terms.</p>

<p><a href="https://www.youtube.com/channel/UC9OeZkIwhzfv-_Cb7fCikLQ">Deep Learning TV</a></p>

<p>Machine learning explained clearly.</p>

<p><a href="https://www.youtube.com/user/OfficialDerren">Derren Brown</a></p>

<p>A mentalist and skeptic screws around with people's heads.</p>

<p><a href="https://www.youtube.com/user/MotherboardTV">Motherboard TV</a></p>

<p>A tech blog for people who are wary of tech.</p>

<p><a href="https://www.youtube.com/user/Nerdwriter1">Nerdwriter</a></p>

<p>Thoughtful video essays about current events and popular culture.</p>

<p><a href="https://www.youtube.com/user/s4myk">Samy Kamkar</a></p>

<p>Hacking explained.</p>

<p><a href="https://www.youtube.com/user/scishowspace">SciShow Space</a></p>

<p>Space is dope.</p>

<p><a href="https://www.youtube.com/user/thebrainscoop">The Brain Scoop</a></p>

<p>Behind the scenes at one of the world's best natural history museums.</p>

<p><a href="https://www.youtube.com/user/enyay">Tom Scott</a></p>

<p>He goes to cool places and talks about things you might not know.</p>

<p><a href="https://www.youtube.com/user/WiredVideoUK">Wired UK</a></p>

<p>For some reason better than the main Wired channel.</p>

<p><a href="https://www.youtube.com/channel/UC7_gcs09iThXybpVgjHZ_7g">PBS Space Time</a></p>

<p>The most hardcore space and physics show I've ever seen.</p>]]></content:encoded></item><item><title><![CDATA[tablet as external monitor with i3wm]]></title><description><![CDATA[<p>tl;dr - read <a href="https://bbs.archlinux.org/viewtopic.php?id=191555">this thread</a>.</p>

<p>After experimenting with using nothing but chromebooks for a while to see how well I could operate with just a shell, browser, and cloud, I treated myself to a top of the line thinkpad last year and decided to give a tiling window manager</p>]]></description><link>https://rob.salmond.ca/tablet-as-external-monitor-with-i3wm/</link><guid isPermaLink="false">f62c50e4-4040-4ff2-b8c7-94134fdeeeca</guid><dc:creator><![CDATA[Rob Salmond]]></dc:creator><pubDate>Mon, 09 Jan 2017 01:15:41 GMT</pubDate><content:encoded><![CDATA[<p>tl;dr - read <a href="https://bbs.archlinux.org/viewtopic.php?id=191555">this thread</a>.</p>

<p>After experimenting with using nothing but chromebooks for a while to see how well I could operate with just a shell, browser, and cloud, I treated myself to a top of the line thinkpad last year, decided to give a tiling window manager a try, and installed <a href="https://i3wm.org/">i3wm</a>.</p>

<p>I've become fairly proficient with it and am quite happy with the user experience, but having gotten used to two external monitors at home for easily moving workspaces around, I <strong>really</strong> miss them when I'm out and about.</p>

<p>I also recently spent the holidays visiting family and as usual performed a bit of tech support. Couldn't figure out what was up with the iPad I gave my grandad last Christmas so this week I went out and got him a new one. Having it sitting around today and having plans to meet a friend for an afternoon of hacking at a local coffee shop I wondered if I could somehow use the iPad as an external monitor.</p>

<p>Turns out you can!</p>

<p>The trick is to use <code>xrandr</code> to define a virtual display device and then <code>x11vnc</code> with the <code>-clip</code> flag, restricting the shared viewport to the size of the virtual display to make it remotely visible. Then any old vnc client on the tablet will do the rest.</p>

<p>This is my config for an iPad 4.</p>

<pre><code>phro@shard:~/.screenlayout$ cat tablet.sh
#!/bin/sh
# NB: the 848x1080_60.00 mode has to exist before it can be assigned.
# Per the linked Arch thread it is created first with something like:
#   cvt 848x1080 60                 # prints a Modeline
#   xrandr --newmode &lt;Modeline from cvt&gt;
#   xrandr --addmode VIRTUAL1 848x1080_60.00
xrandr --output VIRTUAL1 --mode 848x1080_60.00 --right-of eDP1 --output eDP1 --mode 1920x1080 --primary --pos 0x0 --rotate normal
x11vnc -clip 848x1080+1921+0
</code></pre>

<p>Here's how it looks.</p>

<p><img src="https://rob.salmond.ca/content/images/2017/01/Image-uploaded-from-iOS-1.jpg" alt></p>

<p>It works pretty great at home, but out and about there are a few problems to figure out. First, many public wifi hotspots will dynamically create a small /31 network for each client that joins, to keep hostile users from sniffing or spoofing the other folks. In that situation the tablet can't reach the VNC server, so I had to do a little network hopping before I could use it.</p>

<p>Another issue is latency. One network I tried had ping times in the 10-20ms range to Google's DNS servers, yet was producing latencies of more than a second between devices on the network. Over the bloated, uncompressed VNC protocol it took 15-20 seconds for a window moved onto the tablet to appear.</p>

<p>Also, configuration is a pain in the ass. The VNC client I installed on the tablet has a bookmarking feature that lets you save hostnames and credentials for commonly accessed servers, but of course if you're on a strange network the IP of the laptop will change, meaning it's a bit less seamless to get started.</p>

<p>To work around all this I'm planning to grab a low profile USB wireless adapter to set up an ad-hoc network between the tablet and the laptop. And of course, my own tablet since I'll be shipping this one off to grandpa this week.</p>]]></content:encoded></item><item><title><![CDATA[sqlite to mysql with less jank]]></title><description><![CDATA[<p>I recently had a need to move some data from sqlite to mysql and didn't find a solution that suited me. There are some shady looking proprietary apps that do this, lots of janky sed scripts to munge a sqlite dump into mysql format, and I think the mysql workbench</p>]]></description><link>https://rob.salmond.ca/sqlite-to-mysql-with-less-jank/</link><guid isPermaLink="false">f28b042f-0c35-4b3c-936f-d801588f8a4f</guid><dc:creator><![CDATA[Rob Salmond]]></dc:creator><pubDate>Mon, 05 Dec 2016 07:32:52 GMT</pubDate><content:encoded><![CDATA[<p>I recently had a need to move some data from sqlite to mysql and didn't find a solution that suited me. There are some shady looking proprietary apps that do this, lots of janky sed scripts to munge a sqlite dump into mysql format, and I think the mysql workbench might do it but I wasn't prepared to wrestle with that thing. </p>

<p>I wanted a simple tool for a simple task so I wrote one and called it <a href="http://github.com/rsalmond/datahoser">datahoser</a>.</p>

<p>It's built atop the SQLAlchemy <a href="http://docs.sqlalchemy.org/en/latest/core/reflection.html">reflection system</a>; it creates databases and tables, inserts rows, and has a simple but thorough verification step after the data has been copied.</p>

<p>It's not quite ready for prime time as it relies on an unreleased SQLAlchemy bugfix and could use a bit of tidying up, but it functions as advertised. It also has some examples of how to convert between non-native data types in case your source database uses a type not available in the target DB. In theory it should copy data to or from any RDBMS supported by SQLAlchemy, though I've only tried it on sqlite and mysql so far.</p>
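<p>The core copy-and-verify loop is simple enough to sketch with nothing but the standard library. To be clear, this is an illustrative stand-in, not datahoser's actual code: it uses two sqlite handles and a sqlite-specific pragma where datahoser leans on SQLAlchemy reflection to stay engine-agnostic:</p>

<pre><code class="language-python">import sqlite3

def copy_table(src, dst, table):
    # Poor man's "reflection": pull the table's DDL out of the source
    # catalog and replay it on the target.
    ddl = src.execute("SELECT sql FROM sqlite_master WHERE type='table' AND name=?",
                      (table,)).fetchone()[0]
    dst.execute(ddl)
    # Stream every row across.
    ncols = len(src.execute("PRAGMA table_info({})".format(table)).fetchall())
    placeholders = ",".join(["?"] * ncols)
    dst.executemany("INSERT INTO {} VALUES ({})".format(table, placeholders),
                    src.execute("SELECT * FROM {}".format(table)))
    # Verify after copying: row counts must agree.
    src_count = src.execute("SELECT COUNT(*) FROM {}".format(table)).fetchone()[0]
    dst_count = dst.execute("SELECT COUNT(*) FROM {}".format(table)).fetchone()[0]
    assert src_count == dst_count, "row count mismatch on {}".format(table)

src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
src.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])
copy_table(src, dst, "users")
</code></pre>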

<p>I'll throw it up on pypi when I'm able. If you end up using it, lemme know how it goes.</p>

<p>In unrelated news my blog <a href="https://rob.salmond.ca/they-got-science-in-the-fiction-now/">turned ten years old</a> yesterday. I still agree with my original assessment. Accelerando is a hell of a book.</p>]]></content:encoded></item><item><title><![CDATA[i left my heart in san francisco]]></title><description><![CDATA[<p>Not really, it was actually just my wallet.</p>

<p>Last week I visited San Francisco for the first time. I was there for work and those obligations consumed the bulk of my time but I arranged to spend a couple extra days in town to play tourist. Friday was the first</p>]]></description><link>https://rob.salmond.ca/i-left-my-heart-in-san-francisco/</link><guid isPermaLink="false">2c942696-05a5-4fe9-90cf-6494c8ee72aa</guid><dc:creator><![CDATA[Rob Salmond]]></dc:creator><pubDate>Mon, 31 Oct 2016 05:43:22 GMT</pubDate><content:encoded><![CDATA[<p>Not really, it was actually just my wallet.</p>

<p>Last week I visited San Francisco for the first time. I was there for work and those obligations consumed the bulk of my time but I arranged to spend a couple extra days in town to play tourist. Friday was the first day I had free so I did the typical stuff. Walked the Embarcadero. Stuck my nose in touristy shops. Ate greasy food. Checked out Pier 39. Took a photo of the golden gate bridge.</p>

<p>Walked up and took a look at Lombard street, said "fuck that" and went elsewhere. </p>

<p>That evening I met an old colleague for drinks at the Adler Museum cafe in North Beach, a fantastic dive. I also drank some wretched stuff the locals drink called <a href="https://en.wikipedia.org/wiki/Fernet">Fernet</a>, it was awful. Stay away from it.</p>

<p>After he went on his way and another coworker who had spent the afternoon wandering about with me headed to the airport to fly home I set about tackling one of my favourite things to do in any city, but specifically a new city. Finding a bar to knock back some drinks and make friends.</p>

<p>I wandered into Buddha Bar in Chinatown, ordered a cocktail and a shot, and struck up a conversation with a couple visiting from Florida. We tried to make sense of a crazy game called "Liars Dice" the bartender was teaching anyone who seemed interested. After they left another group of folks sat down next to me and started playing. We chatted. I ordered more shots and cocktails. They invited me to join them in crashing some party at a nearby hotel.</p>

<p>I accepted. The night was starting to get interesting!</p>

<p>I recall walking some distance to this party. It turned out to be a fancy Halloween party at a really nice hotel. I was very underdressed but it didn't seem to matter. The details of this party are somewhat obscured at this point by the copious rounds I'd been sharing with my new friends. I recall talking to a woman dressed as a bird, and a guy in a gorilla suit buying me a glass of some scotch that seemed ludicrously expensive.</p>

<p>At some point I decided to say goodnight and head back to my AirBNB. Outside the hotel was a queue of cabs, I grabbed one and left.</p>

<p>Without my wallet.</p>

<p>In a brilliant stroke of luck, at some point in the evening I had bought a drink with my credit card and slid it into my hip pocket instead of back into my wallet. When the cabbie realized I had no cash to pay him he took me to an ATM at a grocery store to try to get a cash advance. My Canadian card wouldn't play ball with the American ATM.</p>

<p>The cabbie left me there.</p>

<p>I'm not exactly sure why I went back downtown at that point, maybe I was hoping to find the hotel and try to locate my wallet. Maybe I could get into the office and crash on a couch. At any rate I had no idea where I was or where my AirBNB was and for some reason I aimed for the biggest buildings I could see and started walking.</p>

<p>For two hours (more on how I know that later).</p>

<p>Somehow I wound up on Russian Hill back in North Beach, then I wandered downtown. My phone was dead by this point. I was sobering up and exhausted, I'd walked almost 20km that day. I started to try to decide by what criteria I should select a doorway to pass out in.</p>

<p>I spotted a guy standing around outside a fast food joint with some friends looking at his phone and in a moment of desperation I approached him and asked if he could look up directions to where I thought my AirBNB was. He graciously did so and informed me it would be a <em>fifty</em> minute walk back to Potrero hill.</p>

<p>He then did an amazing thing and hailed me an Uber and sent me safely back to sleep the night off. I recall only that his name was Sean, I owe you big time man, thank you! I reached out to Uber to see if they can figure out who this generous soul was so I can throw him a few bucks for saving my tail. If they work it out I'll update this post.</p>

<p>Hungover the next morning I set about calling and cancelling all my cards. I also cracked open my laptop to check in for my flight home that evening but found I wasn't able to. I double checked the booking.</p>

<p>I'd missed my flight home. It was actually booked for the night before. No idea how I bungled that one.</p>

<p>In the middle of calling the bank <em>back</em> to desperately ask them to un-freeze my now locked accounts so that I could try to get a new flight, the host of the AirBNB knocked on the door to tell me I'd stayed far past check out and she needed to clean up for the next guests.</p>

<p>It is fair to say that at this point I was freaking out.</p>

<p>I apologized and threw on some clothes, grabbed my bags and was out the door in minutes. I wandered to a nearby diner to sit down and make a phone call and ask somebody for a really big favour.</p>

<p>I called my mother. "Hi Mom, I'm in trouble. I'm trapped in a foreign country with no money. Want to buy me a flight home?".</p>

<p>The conversation evolved from there but we hit a snag trying to purchase the flight, some anti-fraud thing was mucking things up.</p>

<p>I called my brother. "Hey bro, I'm in trouble ..."</p>

<p>He and his wife sorted me out and I made it home later that day. Thanks you guys, I owe you big time!</p>

<p>After a quick change and a shower at home I headed out to a Halloween party to tell the story of the unfortunate and mysterious night out. Lots of questions came about that I didn't have answers to. Who were the people who'd brought me to the party? What hotel was it in? Where had the cab dumped me?</p>

<p>I realized today that I might have some photos in my phone from that night, possibly even geotagged. I checked but all I had were a few blurry shots of the Buddha Bar.</p>

<p><img src="https://rob.salmond.ca/content/images/2016/Oct/IMG_20161028_204432.jpg" alt="Buddha Bar"></p>

<p>It occurred to me though that Google might know where I'd been, and it turns out it did. Google location services was turned on in my phone so there's a detailed log of my movements.</p>

<p>Here I am walking from Buddha Bar to the fancy party, as it happens my memory of that was pretty accurate. The Fairmont San Francisco is a <a href="https://www.google.ca/search?q=fairmont+san+francisco&amp;tbm=isch">beautiful hotel</a>!</p>

<p><img src="https://rob.salmond.ca/content/images/2016/Oct/buddha-fairmont.png" alt="party crashin"></p>

<p>Here's me about an hour later in a cab, turns out he did take me to where I was staying and then mere blocks away to try to get some cash. </p>

<p><img src="https://rob.salmond.ca/content/images/2016/Oct/cab-burn.png" alt="burned"></p>

<p>Once he abandoned me there (I do feel bad about burning him on the ride) if I'd known where I was I could have walked back in minutes. Instead here's what I did for the next two hours.</p>

<p><img src="https://rob.salmond.ca/content/images/2016/Oct/wandering.png" alt="wandering"></p>

<p>Yes I did wander mindlessly through the infamous <a href="https://en.wikipedia.org/wiki/Tenderloin,_San_Francisco#Crime">Tenderloin</a> in the wee hours of the night despite being repeatedly told to stay out of it. Nothing interesting happened though, another stroke of good luck.</p>

<p>It turns out I actually fairly faithfully retraced my steps. Never did find my wallet, I called the Fairmont today. No luck there either. Ah well. I got a wild story to tell at least.</p>

<p>"People still carry around macbooks and winbooks in their bags but they use them as dumb terminals to talk to FLOSS powered VMs, running in FLOSS powered containers, with FLOSS powered backends, routed on FLOSS firmware, analyzed and developed with FLOSS toolchains, and obsessively checked by people who are carrying around pocket superpowers; either GNU Linux powered Android devices or BSD powered iOS devices.</p>

<p>FLOSS is <em>everywhere</em>."</p>

<iframe width="560" height="315" src="https://www.youtube.com/embed/FfaVwkRdwuk" frameborder="0" allowfullscreen></iframe>]]></content:encoded></item></channel></rss>