4.14.2014

all your face are belong to us

For a few minutes, while I flipped through the Space Glasses web site and watched their video, I thought it was a joke.


This quote on their landing page doesn't inspire much confidence.


But of course it's real, and like Glass they're conscious of the implicit privacy concerns from the outset. "Meta prototypes include a front-facing LED that will let others know when you’re recording the world around you." Even as the discussion about surveillance and privacy being a Big Deal goes on, there are interesting experiments under way that use AR both to enhance your own privacy and to invade the privacy of others.

I give it about five minutes before someone combines an AR platform with something like the Carnegie Mellon adaptive headlights and an infrared camera jammer for active privacy management.

3.09.2014

ghost is kinda like ... pretty fast

A while back I tossed a few bucks at the Ghost blogging platform Kickstarter, and even though it's been available for a while I hadn't gotten around to trying it out until recently. Tomorrow night my brother Terry will be appearing on Top Chef Canada, and last week I noticed that other contestants were getting some retweet love from the popular Food Network Twitter account while he was missing out. I offered to get him set up with a site he could get some google juice flowing to, and figured it was a good opportunity to give Ghost a try.

My experience with NodeJS hasn't been fabulous, but nvm has taken some of the sting out of the process, so once I found a version that Ghost seemed to play well with (v0.10.26) it was a pretty smooth setup. It was about two hours from firing up a VPS to going live. There's not much there of course; I just copied a few posts from his old blog over by hand and gave him creds to start posting. Then, anticipating his rise to star chef status, I decided to look at performance.

I should preface this by pointing out, if I haven't already, that my day job is pretty much all about web app performance (if you're worried about that sort of thing, you're doing yourself a disservice by not having a free Traceview account!). I've spent the last ten months setting up and looking at performance metrics for countless websites built on a wide variety of platforms, so I feel like I can speak with at least some authority on this subject.

For most web apps load time is measured in seconds; this is the web most of us surf every day. Good apps have average latencies in the sub-one-second range. Even 900ms is better than most. Apps with really stellar performance will service the bulk of their requests in the 100-200ms range. These apps are really well tuned, but providing this performance across all manner of requests is nearly impossible: POSTing data is slow, and querying large data sets takes time. Providing serious functionality pushes most complex apps out of this range.

With nothing but nginx as a reverse proxy and Ghost running in a single process on a very low traffic blog, the average load time for this site is 19ms! That's pretty much cached static content speed. I put it under load with a few minutes of 500 concurrent requests and it "ballooned" to 1.7 seconds. A full page cache between node and nginx should eliminate the bulk of that latency with very little effort.
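For the curious, that full page cache needn't be anything fancy; nginx's built-in proxy cache would do it. A minimal sketch, assuming Ghost on its default port 2368 (the cache path, zone name, server name, and TTL here are placeholders, not a config I'm actually running):

```nginx
# Cache rendered pages on disk, keyed in a 10MB shared memory zone
proxy_cache_path /var/cache/nginx/ghost levels=1:2 keys_zone=ghost_cache:10m max_size=100m;

server {
    listen 80;
    server_name example.com;               # placeholder

    location / {
        proxy_cache ghost_cache;
        proxy_cache_valid 200 5m;          # serve cached pages for 5 minutes
        proxy_pass http://127.0.0.1:2368;  # Ghost's default port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

With a cache in front, most requests never touch node at all, which is how you get back into static-content territory under load.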

I'm absolutely floored by this thing, so I will be moving this blog over to Ghost as soon as time allows. I want to be sure Terry's stuff is nice and stable and take my time porting all my content over as best I can, but I just can't say no to numbers like that. I'd be impressed to see even a fully cached WordPress blog put up numbers in that range.

If you're in the market for a platform, Ghost gets my solid stamp of approval. I paid a bit less for the VPS I'm running it on than the five bucks a month a managed, hosted Ghost blog costs, but I'm fussy about details. Assuming their hosted solution is as good as the free one, you can't go wrong shelling out for it.

And obviously, tune in to Top Chef Canada tomorrow night and watch my brother run rings around the competition!

2.23.2014

chromebookin it

Along with aches, sunburns and copious insect bites, one of the bumps along the way during my trip a couple summers ago was my trusty Thinkpad dropping dead on me as I made my way over to Vancouver Island. My dad very kindly donated his old Acer, which kept me going for a while. Later, when I found some work and a place to stay in Victoria, a friend gave me a slightly beat up but more powerful Sony Vaio which, while not very portable, was a great machine for a freebie.

Until last weekend, when after another round of abusing my friends via Photoshop (and being soundly punked in return), the magic blue smoke was released and the Vaio booted no more. I ran out on a Sunday afternoon to see what I could find for a replacement, and after talking myself out of throwing the better part of a pay cheque down on another Thinkpad, I walked out of a Best Buy with a neat little ARM-powered Samsung chromebook.

A friend who recently grabbed an x86 model suggested that rather than go with my gut and hose ChromeOS in favor of a full-blown Linux install I should give ChromeOS a try, but as I am wont to tinker I ignored this advice and set about screwing around with it. The preferred methods of supplanting ChromeOS seem to be Crouton or ChrUbuntu. The first lives in a chroot alongside the ChromeOS kernel and the second seems to do some weird munging of ChromeOS components into an Ubuntu image, neither of which appealed to me very much.

Instead I grabbed a couple of cheap SD cards, tried the very detailed Arch install instructions, and then gave the even easier (dd this img file and go) Debian instructions a shot. I'd never used Arch before and was quite pleased when I encountered their smooth wifi setup tools. Debian reminded me how to find the man pages for wpa_supplicant, but both were pretty straightforward.
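For anyone retracing those steps, the wpa_supplicant part boils down to a tiny config file plus two commands; the SSID, passphrase, and wlan0 interface name below are placeholders for whatever your setup uses:

```
# /etc/wpa_supplicant/wpa_supplicant.conf
network={
    ssid="YourNetwork"
    psk="your-passphrase"
}
```

Then `wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf` followed by `dhclient wlan0` should get you an address; the details are all in those man pages Debian pointed me back to.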

After all that I discovered why people have been doing "weird" hybrid things with ChromeOS components rather than making full-blown replacements. As usual, it's the fucking graphics drivers. The Exynos 5 chip in this machine has a Mali-T604 GPU with a small number of shaders, and it provides a nice jank-free Youtube and Netflix experience in ChromeOS, but vesafb, while it works just as advertised, isn't quite up to those tasks, although this video shows some promising WebGL performance with both. Personally I didn't have much luck with video playback under vesafb; maybe there's a way to get software scaling going but I couldn't suss it out.

Faced with this I briefly flirted with the idea of sticking with stock ChromeOS and limping along with the nifty dev tools available, but since everything is mounted noexec it's kinda pointless unless you're building and flashing ChromiumOS yourself.

The issue seems to revolve around a driver called "armsoc" which looks like it was forked from some OMAP thing a while back and seems to be under active development within the chromiumos project. I'm not exactly sure why everyone is copying binaries around, but I suspect it has to do with xorg ABI versions or some such nonsense. ARM also seems to provide closed binary blobs as well as open drivers, which I haven't messed with yet but expect will be disappointing for all the common reasons.

I haven't yet figured it all out, but I did find my way to the limadriver project. It's a full-on free driver for the Mali GPU family and seems to have an amusing backstory, including a 16-year-old core contributor, so I think I'll give that a try. It seems more my speed.

In the long run though I see this machine as a great thing to have in my bag all the time, but I expect I'll probably get a real machine again at some point. Assuming I can find one with a genuine English keyboard. Seriously, if it's that hard to figure out which machine to ship to which province, how does anyone in Europe buy a computer!?

2.11.2014

where is the plan for link bait?

In 2003 I landed my first desk job. Among my responsibilities in that role was the care and feeding of a pretty crummy proprietary Windows-based mail server. One mandate that came to consume countless hours of my life was to ensure that "as little spam as possible made it to the users, but NO business mail was ever blocked".

This server had some pretty convoluted filtering options that I learned the ins and outs of (I had to plead on their mailing list for regex support because it would have added "too much overhead"). Every day users would forward spam that found its way to their inboxes to me, and I would scour the blocked messages for anything business related and forward stragglers to the intended recipients. I'd then take the false positives and the false negatives and update the rules using the tools available to me. By the time I left that job this process would consume three to four hours of my work day, every day.

I loathe spammers.

After some time I integrated Spamhaus black-holing into our systems. It is (or was then; I haven't used it in ages) a system that accepts forwarded spam from large numbers of users and then adds offending sources to a block list you can automatically load into your mail server to try to keep up. It was certainly not perfect, and I still had to look for false positives, but it cut down on false negatives.

Later in that year of firsts, as you may be guessing, I also encountered Bayes' method for the first time via Paul Graham's essay A Plan for Spam. I didn't understand much of it (stats are still not my strong suit) but I knew I'd found some powerful geekery. It made me feel the way I had when I first discovered OS-level APIs and later real mode instructions; deeper magic was waiting for me to understand it. I printed it out and kept it next to my crapper with other papers which would take multiple passes to slowly grok.
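The gist of the essay is simple enough to sketch: count how often each word shows up in spam versus legitimate mail, then combine the per-word odds into a score for new messages. This toy version (the tiny training set, Laplace smoothing, and sigmoid squashing are my simplifications, not Graham's exact formula) captures the flavor:

```python
import math
from collections import Counter

def train(spam_docs, ham_docs):
    # Count word occurrences in each class
    spam_counts = Counter(w for d in spam_docs for w in d.split())
    ham_counts = Counter(w for d in ham_docs for w in d.split())
    return spam_counts, ham_counts, len(spam_docs), len(ham_docs)

def spam_probability(msg, spam_counts, ham_counts, n_spam, n_ham):
    # Start from the class prior, then add per-word log-odds
    score = math.log(n_spam / n_ham)
    vocab = set(spam_counts) | set(ham_counts)
    for w in msg.split():
        # Laplace smoothing so unseen words don't zero everything out
        p_w_spam = (spam_counts[w] + 1) / (sum(spam_counts.values()) + len(vocab))
        p_w_ham = (ham_counts[w] + 1) / (sum(ham_counts.values()) + len(vocab))
        score += math.log(p_w_spam / p_w_ham)
    return 1 / (1 + math.exp(-score))  # squash log-odds to a 0..1 score

spam = ["buy cheap pills now", "cheap pills limited offer"]
ham = ["lunch meeting tomorrow", "project status meeting notes"]
model = train(spam, ham)
print("spam" if spam_probability("cheap pills", *model) > 0.5 else "ham")       # -> spam
print("spam" if spam_probability("meeting tomorrow", *model) > 0.5 else "ham")  # -> ham
```

The real thing trains on thousands of messages and keeps per-user corpora, but the word-statistics core is the same idea.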

A year later Gmail would come along and use Bayes to essentially solve the spam problem for me and the rest of the internet, but by then I'd put that battle behind me.

Around that same time RSS (and later Atom) was being dreamed up, and shortly after that feed aggregators came along and brought us a new kind of inbox with new kinds of problems. Machine learning would help again with the new problem of prioritizing large amounts of content, but as the number and variety of feeds increased, the common implementations, lacking a manual override, caused their own issues.

Maybe you never want to miss a post on a particular news feed, or maybe the submission you care about most on Hacker News today didn't receive a single upvote. The Spam or Not Spam classifier solutions don't work as well for the question of Interesting or Not Interesting. There are other ways to approach this problem but it's not yet solved in the way that spam is solved.

The unsolved problems of interesting or not interesting (Digg, Reddit, Twitter, your Facebook feed, etc.) and in some ways relevant or not relevant (search) are vulnerable to being undermined. In these spaces (as with spam before them), the fact that more eyeballs means more revenue makes shortcut optimizations like some types of SEO or link baiting worth pursuing.

Search and social media have mitigated the issue somewhat by providing things like Google AdWords and promoted Tweets, but the immense value to be gained from having something that's not an ad, but not quite organic either, go viral far outstrips that of shelling out to put your copy in front of some demographically plausible potential customers. In those transactions it's far better to be the venue for the ad being placed than to be the one buying it, better to provide the valuable service or relevant content than to try to ride its momentum.

The space in which this provider-of-eyeballs / consumer-of-attention power struggle is happening is being aggressively explored for advantage on all sides, and in true internet form is being iterated on at a staggering pace, making even my info-addled head spin.

The life cycle of a linkbait 0-day is going from multimillion dollar idea to passé joke to fairly interesting content faster than I can keep up.

Facebook has experimented with various solutions; this one may help you preserve a Facebook friendship.

(I didn't have to search for this screenshot; it was at the top of my feed.)

All or nothing is a start, but everyone has grudgingly un-followed someone they genuinely find interesting because their signal-to-noise ratio was too low. Where is the "Plan for Spam" for wading through content?

Like any good fiend, my info addiction sets my blood itching when my junk is diluted with cutting agents, but how to scratch!? Viral doesn't necessarily mean I will or won't like it, and neither does popularity among like-minded people; link bait doesn't always equal uninteresting. Your favourite hand-curated collection of content won't universally produce things to your taste, but maybe you will sit and read every word of every post on your sister's blog. A black box feed aggregator or a crowdsourced social news site offers minimal control for manually adjusting how things are prioritized. "Unsubscribe from the default subreddits" is such a common suggestion for improving the Reddit experience it may as well be the default.

At least Twitter starts you at zero and lets you build your own prison. Although the number of times my father has said of Twitter that "no one cares what you had for lunch" (to which I reply that, just like when deciding who to befriend IRL, if you follow boring people you're going to have a boring news feed) makes me wonder what the average Twitter experience is like.

There is a void in my internet. One that's pissing me off and frequently dominating my thoughts. Experience has taught me that this usually means two things are also happening. If it's bugging me and I'm thinking about it, then it's bugging smarter people who are also thinking about it. And if smart people are thinking about a problem facing the internet, someone is in the process of cooking up a solution right now.

So where the hell is it?

7.07.2013

hubot meet stashboard

For most of the years I've been working in software nearly everything I've written hasn't been visible to the world. Somewhat because it would probably only be of interest to this guy, but mostly because it was proprietary and owned by a massive corporation.

With my recent change of locale I've enjoyed going from coding in an airlock (bank), only slightly better than coding in a vacuum, to coding in a more open ecosystem. I'm digging the social coding, but more than that I'm really loving working with stuff that I don't need to jury-rig to bypass weird limitations (ever tried writing self-modifying JCL? shudder).

One of the more entertaining pieces of software we use is Hubot, a chat bot whose functionality is readily extended with add-on scripts, though a co-worker recently described him as a "charming nuisance", which is more or less true. I've been thinking he could be more productive (automate ALL THE THINGS!) but hadn't settled on where to start when a recent service outage put an idea in my head.

During a minor database snafu some of the team discussed ways to get out in front of the issue with users to make sure people knew we were on top of things. Emails and blog posts were put out there, but afterward we realized that the people who had the credentials to update our Stashboard were the same ones who were needed to fight the fires.

So I spent some time setting up a Stashboard, installing a Hubot to play with, and learning CoffeeScript. That last bit was interesting; I haven't written Javascript in about a hundred years, and it seems the cool kids are all on about this asynchronous shit, which took me a while to wrap my head around. This ancient blog post helped with that.

I ended up producing a pretty decent interface between the two which I've added to our fork of hubot-scripts. Hopefully somebody else out there finds it useful.