Generating an SSL key + CSR

Because I’ve had to look it up multiple times

Generate the private key:
openssl genrsa -out domain.tld.key 2048
Then generate the CSR:
openssl req -sha256 -new -key domain.tld.key -out domain.tld.csr

I'm certain there's a one-liner to do this, but I didn't find anything while looking (briefly).
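For future me, something like this should do both steps in one shot (untested as of this writing; -nodes just skips encrypting the key, matching the genrsa default above):

openssl req -new -newkey rsa:2048 -nodes -sha256 -keyout domain.tld.key -out domain.tld.csr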



Cloudflare & Python’s urllib

TL;DR: Trying to diagnose why my copr builds were abruptly failing, I found an interesting thing: Cloudflare’s Browser Integrity Check apparently doesn’t like Python’s urllib sending requests.

The symptoms in Copr were weird: builds would start importing, then fail with no log output. Trying to diagnose the problem made no sense to me – I could download the file just fine through Chrome, Firefox and wget, so I figured the issue was with Copr. I was all set to file a bug with them when I decided to look at what other people were building and see if URL importing worked for them.

Surprise surprise, it did. That clearly meant the problem was isolated to me and my server, so I held off on filing the bug until I could find out more. My own things accessing Jenkins worked fine, so I tried other tools: one said there were no issues reaching Jenkins, while RequestBin said… not much at all. I couldn't get it to work with Copr's URL detection – Copr wants the path to end with src.rpm, while RequestBin uses the path to decide which bin to route requests to.

Next I looked at Jenkins behind the Nginx proxy. I found an extra Nginx option that needed to be set for Jenkins' CSRF protection to work properly, which was a plus. (I also found something on serving static files through Nginx instead of Jenkins, which is now on my todo list.) I ran through the steps in DO's setting-up-Jenkins-behind-Nginx tutorial just in case I had missed something; everything looked fine.
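For reference, the proxy block in that kind of setup ends up looking roughly like this (hostname is a placeholder) – the proxy_set_header lines are the usual suspects when Jenkins' CSRF crumbs start failing:

location / {
    proxy_pass          http://127.0.0.1:8080;
    proxy_set_header    Host              $host;
    proxy_set_header    X-Real-IP         $remote_addr;
    proxy_set_header    X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header    X-Forwarded-Proto $scheme;
    proxy_redirect      http://127.0.0.1:8080 https://jenkins.example.org;
}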

Without logs I couldn't really trace what was happening, so I looked at using GitHub to host the SRPMs, since GitHub clearly worked. The first thing I realized was that keeping 3 branches in an attempt to reduce the number of repos I spawned was a bad idea, and I should fix that (and maybe follow the git flow strategy…). GitHub has a releases API, so I'll be spending some time with it in the near future (also, streaming uploads for the 100MB+ pagespeed releases…), or I'll try the github-release script that someone wrote. Either way, I'd had enough for the night and stopped.
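From a quick look, the releases API flow is two calls – create the release, then push the asset to the upload endpoint (repo name, tag and token below are placeholders):

# create the release; note the id in the JSON response
curl -H "Authorization: token $GITHUB_TOKEN" \
     -d '{"tag_name": "v1.9.12.1"}' \
     https://api.github.com/repos/USER/REPO/releases

# upload the SRPM as a release asset, using that id
curl -H "Authorization: token $GITHUB_TOKEN" \
     -H "Content-Type: application/x-rpm" \
     --data-binary @nginx-pagespeed.src.rpm \
     "https://uploads.github.com/repos/USER/REPO/releases/RELEASE_ID/assets?name=nginx-pagespeed.src.rpm"

Asset download URLs end with the asset filename, so Copr's src.rpm check should be happy with them.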

Today I picked it back up and took another look. Still thinking it was my server, I tried to isolate which component was failing by serving the src.rpm file on a different domain. Instead of downloading the file and sticking it in a folder, I symlinked a folder to the Jenkins workspace directory and submitted a build. Surprisingly, it worked, so I took a look at my nginx logfile, which revealed that the user agent of whatever downloaded it was Python-urllib/1.17. That looked like something from the Python stdlib that I could try to replicate the issue with. Started up ipython, googled for "python urllib download file", and found the urlretrieve function. Tried it against the alternate src.rpm URL, and it worked. Tried it against Jenkins directly, and it failed.
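The whole replication boils down to one call (python here being Python 2, which is where the Python-urllib/1.17 user agent comes from; the Jenkins URL is made up):

python -c 'import urllib; urllib.urlretrieve("https://jenkins.example.org/job/nginx-pagespeed/lastSuccessfulBuild/artifact/nginx-pagespeed.src.rpm", "nginx-pagespeed.src.rpm")'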

Now I could replicate the symptoms! Next step was to google “Cloudflare blocking urllib”, which suggested that the (lack of) headers was tripping Cloudflare’s Browser Integrity Check. Toggled it off in Cloudflare, and sure enough I could now get the src.rpm file through urllib.

Deciding I now had enough info to file a bug, I went to the Copr codebase to find the relevant line. Downloaded the latest release, unzipped it to grep the code, found the file and line, went to grab the line from the Git repo – and couldn't find the line. Turns out that between last week and this week the bug had been fixed, just not deployed to the live Copr yet.

So for now I’ll be regenerating all the missed versions of nginx with the integrity check disabled. Fingers crossed there’s nothing else wrong with Copr in the meantime…






Just bundled it into Copr, so now there's a yum repo for Fedora 20, 21, 22 and Rawhide & CentOS 6+7 –

Amusingly, Rawhide changed the ABI, which the configure script had problems with. But I found a solution, which is going into ngx_pagespeed as a documentation fix until the code is brought up to the newer C++ standard.


  • Fix up the config file – CentOS 6 doesn't like the uncommented pid file set to /run/pid (the systemd default, which doesn't exist under init). Fixed by commenting out the pid declaration in the config file.
  • Add a pagespeed-by-default config file – added one that gets included automatically by virtue of being in /etc/nginx/conf.d.
  • Rewrite the spec to conflict with the base nginx – it now Obsoletes nginx < 1.7.0 and Conflicts with nginx >= 1.7.0.
  • Create the pagespeed cache folder as part of the install (specified by FileCachePath, normally /var/ngx_pagespeed_cache) – done as part of the .spec file (rough spec excerpt below).
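Roughly, the packaging side of that list boils down to a couple of spec lines (a sketch, with the paths and versions from above):

Obsoletes:      nginx < 1.7.0
Conflicts:      nginx >= 1.7.0

# and at the end of %install:
mkdir -p %{buildroot}/var/ngx_pagespeed_cache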

Also: an SELinux boolean needs to be set – ngx_pagespeed needs the httpd_execmem permission.
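That one is a single command on the affected box (-P makes it persist across reboots):

setsebool -P httpd_execmem 1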

Things that helped:

  • Running rpmbuild without the rigid in-root-of-home-directory structure: rpmbuild -bs --define "_topdir $(pwd)" nginx-pagespeed.spec
  • Running mock with --no-cleanup-after and --no-clean, so I could investigate the build directory (located at /var/lib/mock/<release>/root/build/BUILD, or at least along those lines) – the whole loop is sketched below
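Put together, the loop looked roughly like this (the mock config name depends on the target release, and the exact chroot path may vary between mock versions):

rpmbuild -bs --define "_topdir $(pwd)" nginx-pagespeed.spec
mock -r fedora-22-x86_64 --no-clean --no-cleanup-after SRPMS/nginx-pagespeed-*.src.rpm
ls /var/lib/mock/fedora-22-x86_64/root/builddir/build/BUILD/   # poke around the leftover chroot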

Spec files are now in a git repo as well:

Exhaustive list of libraries that may or may not help nginx:



Building Nginx SRPMs

A companion to my earlier post – this one actually has commands.




Path to building Nginx Mainline RPMs for Fedora & CentOS

Or: How I spent an afternoon doing a deep dive into the RPM spec and solving a problem for myself

tl;dr – Nginx Mainline packages are being built for Fedora & CentOS at

My webserver's running nginx 1.4.7, a version that hasn't gotten non-bugfix attention since March 2013, according to the changelog. Oddly enough for a Fedora package, the version in Koji is the stable branch – something that makes sense for CentOS/EPEL, since those are long-term support releases, but not for Fedora. Doubly annoying was the fact that the packages for Fedora 20 (because Fedora 21 isn't supported on OpenVZ yet) were one step behind the official stable – 1.4 instead of 1.6.
If I wanted mainline on Fedora, I was going to have to build it from source.



My State of Serving, aka VPS recap

So I wrote a bunch about VPSes a few months ago, and what I thought my future looked like with them. Well, a bunch has changed since then and will keep changing going forward, so let's go:

Full out cloud hosting

Still good for buy-as-you-need systems, still not right for my usecase. Other than reading about price drops in The Register/on Twitter, I’ll be passing over them.

Cloudy VPSes

Haven't seen too much movement on this. I'm going to lump RamNode into the Cloudy VPSes section, simply because they're in 5 locations – sheer scale. Also, their prices are approaching Digital Ocean levels. I'd still go with DO though – RamNode's cheaper plans are OpenVZ based, not KVM like DO's.

Traditional VPSes

This is probably the most significant change – instead of migrating to a Digital Ocean droplet/Vultr node, I decided to go with Crissic.

I haven't actually had my VPS with them go down in ~1 year (yet), and the weird SSH issues resolved themselves around December, so I decided to bite the bullet and extend my existing VPS plan with them. (Or should I say him – all the support tickets have been signed Skylar, so it's looking like a one-man operation.) Crissic is consistently coming in the top 5 in the LowEndTalk forum provider poll, so that was additional validation.

With Nginx/PHP-FPM/MariaDB all going, I’m hovering around 300MB of RAM used. So I have a bunch of headroom, which is good.

I also picked up a bunch of NAT-ed VPSes – MegaVZ is offering 5 512MB nodes spread around the world for 20€. So I got them as OpenVPN endpoints. They’re likely going to be pressed into service as build nodes for a bunch of projects (moving Jenkins off my Crissic VPS), but those plans are still up in the air. We shall see…

Dedicated Server?

The most interesting thing I found was Server Bidding. Hetzner (a massive German DC company) auctions off servers that were set up for customers who have since cancelled their service. Admittedly, most of the cheaper stuff is consumer grade (i7s instead of Xeons), but I can't really complain. There's an i7-3770 with 32GB of RAM and 2x 3TB drives going for €30.25 (~US$34) right now. And prices only go down over time (until someone takes the server).

That mix of price/specs is pretty much unmatchable. KVM VPSes are ~$7/month for 1GB of RAM (and roughly scale linearly), so I'd be looking at $28/month for 4GB of RAM if I went the normal route. And that's totally ignoring the 2x 3TB HDDs (admittedly, I'd want those in RAID 1, so effectively only 3TB).


Ansible gotchas

  • Tasks do not like having the remote_user changed mid-playbook if you specify an SSH password
    • Specifically, creating an 'ansible' user as the first task and then using it for everything else in the playbook doesn't work, because Ansible will always attempt to use the declared password for the newly created user, which promptly fails
    • Solution: separate playbooks! (See the sketch after this list.)
    • People debate the usefulness of a separate config account, since it's effectively root + key-based login. They have a point, since I can get the same security by disabling password-based auth.
  • If your inventory file is +x, ansible will attempt to execute it, even if it is the standard inventory list (.INI-format plain text)
  • I’m liking the look of the Ansible for DevOps book
  • Ansible will automatically import GPG keys during a yum install if the matching GPG key hasn’t been imported yet (Seen after installing EPEL, then installing a package from there)
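A minimal sketch of that split – a bootstrap playbook run with the password, then everything else keyed off the new user (usernames and key path are just examples):

# bootstrap.yml – run with: ansible-playbook bootstrap.yml -k
- hosts: all
  remote_user: root
  tasks:
    - name: create the config account
      user: name=ansible groups=wheel
    - name: install the SSH key for it
      authorized_key: user=ansible key="{{ lookup('file', '~/.ssh/id_rsa.pub') }}"

# site.yml – run afterwards, key auth only, no password prompt
- hosts: all
  remote_user: ansible
  sudo: yes
  tasks:
    - name: do the actual work
      yum: name=epel-release state=present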


My state of VPSes

The VPS market is really really interesting to watch. Maybe it’s just me, but the idea of being able to get a year of a decent system for the price of a pizza is fascinating – and somewhat dangerous to my wallet. At my peak, I had 4 VPSes running at the same time – and each of them doing practically nothing. So I ran TF2 servers off the ones that could support it until the prepaid year was over.

It's just so… attractive. My 'own' servers, without the expense of needing to buy and house pizza boxes that sound like hurricanes and are apparently good places for cat puke. And now I find myself needing to get one to power everything in the near future, because I've been unsatisfied with DreamHost for about 2 years and they finally pushed me over the edge recently. So I decided to take notes, with the target of migrating everything of mine to a VPS in December.

My usecase can be summed up simply as an always-on, low-traffic webserver hosting a personal blog (WordPress, while I look at Ghost), a MySQL database and a Jenkins build server (them Lightroom plugins!). Somewhat price-sensitive – I'm not made of money, but at the same time I'm not going to sweat the price of a coffee at Starbucks. Bonus points for extra space for hosting friends' sites though. They can pay me back by buying me a drink or something.

Cloud Hosting

Cloud hosting is hard to quantify – off-hand, the big three are Amazon, Google and Azure. But I'm going to expand my definition to cloud = anything that's on-demand with per-usage charges. Which opens it up.

The big 3 are pretty much unsuitable. Amazon's still aiming at companies that need to spin instances up and down – and quickly prices itself out compared to the alternatives. Even looking at the price of the heavy-use reserved instances, that's $51/year upfront plus hourly fees for the smallest system (t2.micro, with 1GB of RAM & a throttled CPU). But wait, that's actually $4.25 + $2.16 = $6.41 per month (assuming the current $0.003/hour charge stays the same), which isn't so bad.

Except I had a wonderful manager in the past who pointed out that it’s not the cost of the compute power that was the killer when it came to building a company on EC2, it’s the cost of the outbound bandwidth. Data out is still an extra fee. Oh, and disk space is charged per GB as well. The wonders of all those options!

Google Compute Engine and Microsoft Azure are pretty much the same. Google wants $6.05 for their lowest system (0.6GB of RAM & 1 "virtual core", likely a throttled CPU much like Amazon's); Microsoft just states an estimated price of $13/month for an A0 instance, with 1 core and 0.75GB of RAM. Here's a nice difference though: it comes with 20GB of disk.

So the big 3 are out for me (I write, looking back after having looked elsewhere). Let's look at other providers, shall we?

Cloudy VPSes

So named because they're charge-per-use VPSes, but all the ancillary stuff is included in the price, like with traditional VPSes.

I haven't done a really extensive search for these types of providers. The biggest known one is DigitalOcean, and I found Vultr on Twitter. And then I found Atlantic.Net somewhere else. (I think a Hacker News thread on DigitalOcean, amusingly.)

The biggest thing for me recently was the sudden addition of Atlantic.Net's lowest-end system – $0.99/month for 256MB of RAM, 10GB of disk and 1TB of transfer is pretty much unheard of. Making that ultra-low price point generally available is entirely new, and it's going to be interesting to see if anyone in the cloudy space matches it – it's actually cheaper than traditional VPS providers for a small always-on box. It's definitely on my list to watch (and possibly get, for things like a target for automatic MySQL backups).

DigitalOcean would have my vote right off the bat for that sweet, sweet $100 in credit they're giving to students, except for the fact that it's for new users only. Which is rather annoying.

All 3 providers appear to be about the same:

  • Vultr appears to be trying to undercut DigitalOcean in the low end – More RAM/less space/same price for the cheapest box, same RAM/less space/cheaper for the next level up.
  • Atlantic.Net also appears to be trying to undercut DigitalOcean, just with more space and bandwidth instead of RAM. And literally cents cheaper! ($4.97 vs $5? Mhmm…)
  • Meanwhile, DigitalOcean coasts along on brand recognition and testimonials of friends.

So cloudy VPSes are far better for the single always-on server usecase. How about traditional VPSes?

Traditional VPSes

Traditional VPSes and I have an interesting history. Prior to finding LowEndBox, VPSes were super expensive compared to shared hosting – e.g. I still have a client/friend hanging onto his $50/month 1and1 VPS that I cringe about.

That pretty much changed when I found LowEndBox. At the ultra-low price point (like Atlantic.Net's), LowEndBox has had $10/year offers in the past, primarily for systems with less RAM (but 256MB has happened) –

The downside of traditional VPSes is that the better prices == prepaying annually. No pay-as-you-go here.

My history:

  • Virpus for 2 Xen VPSes in 2012 – I think they are one of the few who are doing Xen as opposed to OpenVZ or even KVM
  • Hostigation in 2013 – for a personal project that never actually happened
  • WeLoveServers in September 2013 – basic server for $19/year
  • Crissic in early 2014 – ostensibly better than WLS for $2/month; I hosted a client site on it

Looking at the Virpus site, they’ve dropped their prices from $10/month to $5/month. I can’t remember what I paid for Hostigation, but I rarely even logged into that VPS.

WLS was at the very least decent. I used it mainly as an OpenVPN endpoint more than anything.

Crissic is good, but overly touchy on limits. I regularly get my SSH connection closed despite keepalives being set, and I can’t figure out why.

Overselling is a thing that happens though – CPU usage in particular is really limited, and suspensions happen. So I’m leery of moving my stuff to a traditional VPS provider without being able to test it. Especially if I just paid for a year of hosting.

New to me: RAM heavy VPSes

VPSDime is a new-to-me provider: 6GB of RAM for $7 a month. Unfortunately, that's offset by only 30GB of storage and 2TB of bandwidth vs WeLoveServers' 6GB/80GB/10TB for $19/month. Then again, I don't really use that much storage or bandwidth.

Bonus: Storage VPSes!

The main reason I kept DreamHost for 2 years after I got annoyed with them was that their AUP allowed me to use RAW images on my photography site, and finding space for 500GB of photos was not cheap anywhere else. With the introduction of their new DreamObjects service (read: S3 clone), that changed, and my last reason for keeping them disappeared.

VPSDime has interesting storage VPS offers. 500GB of disk for $7 is $2 more than Amazon Glacier (not accounting for the fact that I’m unlikely to get to use all 500GB), but offset that against the fact that it’s online (no retrieval times) and I don’t have to pay retrieval fees. Maybe I’ll get to ditch one of my external drives…

Runner up: BuyVM – more established, but about double the price for the same amount of space :(


I'm inclined to see what Vultr's 1GB/$7-a-month plan is like for hosting my personal stuff. Also thinking of getting a 1GB WLS VPS to be an OpenVPN endpoint/development server – possibly Atlantic.Net's cheapest service instead, but an extra 58 cents per month isn't going to break the bank, and I have the advantage of not needing to give Atlantic.Net my credit card details. In the worst case I can walk away from WLS without worrying about my credit card being charged.

Yay for backups and Ansible making it possible to deploy a new server in minutes once I get the environment set up just right.



Booting from SD Card on an X230

The SD Card slot is unfortunately on the PCI bus, so it doesn’t show up as a bootable device.


Have a /boot partition on an internal drive and point it at the SD card. I reclaimed ~900MB from Lenovo's system restore partition to make the /boot partition; GRUB was installed to the internal drive.

As suggested elsewhere, edit the initramfs image to get SD card support added. In Fedora this was done using dracut's kernel-modules option in /etc/dracut.conf, so any kernel update would automatically get the appropriate modules added.
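In /etc/dracut.conf that's a single add_drivers line – I believe the set needed for the reader is roughly the sdhci/mmc modules (the exact list may vary by hardware):

# /etc/dracut.conf
add_drivers+=" sdhci sdhci_pci mmc_core mmc_block "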

Sticking point – Fedora 17 seemed to have the drivers included natively; Fedora 20 didn't, and systemd would die.

Main issue was naively assuming that grub was using the correct initramfs that was built. I installed Fedora, then did a yum update kernel to generate the initramfs image. But that didn’t work, so I tried various other things, not realising that the bugged initramfs image was still there, and because it was newer, GRUB was automatically using that instead of the initramfs that I was testing.
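In hindsight, a quick check of which initramfs was newest and what was actually inside it would have caught this – lsinitrd ships with dracut:

ls -lt /boot/initramfs-*.img
lsinitrd /boot/initramfs-3.11.10-301.fc20.x86_64.img | grep -iE 'sdhci|mmc'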

The other issue was SELinux – the file contexts no longer matched (oddly enough). Not sure how/why this happened, but it ended up that I couldn't log in – the GDM login screen never came up, and logging into a console just dumped me right back to the blank console login. Had to boot into single-user mode (append "single" to the kernel boot line in GRUB) and found out it was SELinux when I got AVC denial errors about /bin/sh not having permission to execute. setenforce 0 fixed this.
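For the record (setenforce just gets you through the current boot; a full relabel is the usual longer-term fix for mismatched contexts):

setenforce 0          # stop enforcing for now so logins work
touch /.autorelabel   # relabel the whole filesystem on the next reboot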

cd /run/media/liveuser/ba7fca87-462e-4e8e-91bc-494f352f5293/   # the installed root partition
mount -o bind /dev dev
mount -o bind /sys sys
mount -o bind /proc proc
mount -o bind /tmp tmp
mount -o bind /run/media/liveuser/b887b758-1bed-48f9-9ee9-4c78af34b487 boot   # the bootable /boot drive
chroot .   # work inside the installed system, not the live image
vi /etc/dracut.conf   # add the SD card modules (see the add_drivers line above)
cd /boot
dracut initramfs-3.11.10-301.fc20.x86_64.img 3.11.10-301.fc20.x86_64 --force   # spell out the installed kernel version – `uname -r` reports the live image's kernel here

Something I read appears to suggest that I could boot GRUB on the internal drive and hand off control to a GRUB on the SD card, which would mean I could get SD cards for different distros and not have to worry about keeping copies of the various vmlinuz/initramfs files in the hard drive's /boot. Which would be the ideal situation.


TF2 on DigitalOcean

Or how I spent 3 cents on Digital Ocean to play MvM with my friends for 2 hours.

Maybe the MvM servers were having issues, but 4 different people trying to create a game didn't work (or at least TF2 kept on saying 'connection error') – for everyone.

So I decided to try and spin up a server, like I used to do on EC2, except I decided to use the $10 of credit from Digital Ocean that I got when signing up, simply because Digital Ocean seemed a lot easier to use than Amazon’s Spot Instances.

Used Digital Ocean's 1GB/1 CPU node ($0.015/hour) in the Singapore location; no complaints about slowness/ping issues from the people in Singapore/Japan, but my ping from Ontario was ~330ms.

One line command, untested:

sudo yum -y install wget screen less vim && sudo service iptables stop && wget && tar -xvzf steamcmd_linux.tar.gz && mkdir tf2 && ./steamcmd.sh +force_install_dir tf2 +login anonymous +app_update 232250 +quit && mkdir -p ~/.steam/sdk32 && cp linux32/steamclient.so ~/.steam/sdk32/ && cd tf2 && echo "hostname famiry MvM
rcon_password tehfamiry" > tf/cfg/server.cfg && ./srcds_run -game tf -maxplayers 32 -console +map mvm_decoy
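Once srcds is up, everyone just connects from the TF2 console, 27015 being the default srcds port (the IP is whatever the droplet gets):

connect <droplet-ip>:27015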

See also:
  • a TF2 server config generator
  • a list of what each config option does
  • Valve's official guide on running TF2 servers
  • a couple of fancy systems for automating the setup & running of TF2 servers

Why Digital Ocean: In terms of money, 3 cents a week isn’t going to kill me. But I still have some Amazon credit, so it’d be nice to use that up first.

Looking at the prices (as of Jul 20), Digital Ocean is definitely cheaper than Amazon – the cheapest instance that looks like it would work is the t2.small, and that's 4 cents an hour. (I'm pretty sure a t2.micro instance won't be good enough.) More interestingly, an m3.medium instance is ~10 cents an hour.

The m3.medium instance is interesting because it has a spot instance option – and the price when I checked was 1.01 cents/hour. However, I'm pretty sure the OS install + the TF2 files would be larger than the 4GB of storage assigned to the instance, so I'd also need an additional EBS volume, say ~5GB. Those are $0.12/GB/month, so for 5GB over ~2 hours the cost would be pretty much negligible.

However, there is one final cost: data transfer out. Above 1GB, Amazon will charge 19 cents for a partial GB. Assuming I play 4 weekends a month, I'm pretty sure the server will send more than 1GB of data (exceeding the free data transfer tier), so I'd be charged the 19 cents. Averaging this out over 4 weekends, I'd get charged ~5 cents a weekend. Thus, even with the actual compute cost being lower, I'd still be charged more than double on EC2 compared to Digital Ocean.

But this is still only 7 cents a weekend.

The $10 of credit from Digital Ocean will last me approximately 6 years of weekend playing, assuming no price changes. I have ~$15 of Amazon credit, so it looks like I'll get 10 years of TF2 playing in – and I'm pretty sure we'll have moved on to a new game long before then.

