June 10th 2014 Super Smash Brothers Character Roster
WHERE IN FUCKING FUCK IS JIGGLYPUFF I WILL END YOU NINTENDO.
Snake or riot!
I’ve been meaning to post something about The Big Bang Theory for a while now, but it’s taken me till now to really understand what it is about the show that makes me uncomfortable. I’m not exactly a believer in the whole “only write about the things you like, don’t trash the things you don’t” trend which seems to be plaguing the comments sections of negative articles lately, but I wanted to be able to really examine why I don’t like TBBT rather than just slagging it off. My main questions being: why don’t I like this anymore? Why do I feel uncomfortable watching it? And why do I get so annoyed when I see people sing its praises online?

The thing which really sparked this post was seeing a raft of comments on Facebook, below the last round of voting in Television Without Pity’s Tubey Awards, claiming The Big Bang Theory to be “the best comedy on TV”. This made me angry, so instead of posting an impulsive comment calling out their bad taste which I’d probably regret later, I decided to really analyse why seeing comments like that made me so mad when previously, although I didn’t really love the show, I’d never considered myself as disliking The Big Bang Theory.
Hell, I even have season one on DVD; it’s sitting right between Battlestar Galactica and Bored To Death in my alphabetised collection.
And here, I think, is where my problem with The Big Bang Theory lies…
The Final Four
I will miss this app. Just so much.
Unironically, unsnarkily, unpunditly. I will really, really miss Google Reader.
Truly outstanding app.
“At its most crass level, an email sabbatical is when you make all of your email bounce. But you can’t simply turn off your email without pissing off countless people in your life. Thus, an email sabbatical is actually a series of steps to let you step away from your inbox guilt-free and return to an empty inbox upon your return.”
“Recognize that the very molecules that make up your body, the atoms that construct the molecules, are traceable to the crucibles that were once the centers of high mass stars that exploded their chemically rich guts into the galaxy, enriching pristine gas clouds with the chemistry of life. So that we are all connected to each other biologically, to the earth chemically and to the rest of the universe atomically. That’s kinda cool! That makes me smile and I actually feel quite large at the end of that. It’s not that we are better than the universe, we are part of the universe. We are in the universe and the universe is in us.”
Join Strax and the gang for a seasonal Sontaran sing-song!
After seeing Nicholas Piël benchmark a bunch of Python web servers, I was just itching to try some different configurations. So, I thought I would try to copy his autobench setup to do some testing of my own.
Amazon EC2 and Ubuntu to the rescue!
The key was that I wanted to be able to launch several instances at once and only have to connect to one of them to control them all. I thought I would have to build a custom AMI because I only wanted to do the custom configuration once.
Turns out, I was wrong. Ubuntu provides ready-to-go images that can be instantiated with a custom script.
So, the first thing to do is pick the AMI you want to use from the link above. I went with us-east-1, 32-bit, and EBS root so that I could use micro instances. You can choose an instance-store root if you want to use small instances.
Next, make sure you have a security group (I created a new one called Autobench) that permits both SSH and TCP port 4600. You can do all this from the AWS Management Console.
Next, launch several instances (4 is a nice number) of the AMI you chose before, but when it asks you for User Data, paste this in there:

#!/bin/bash
apt-get update
apt-get -y install checkinstall

# Enables the server to open LOTS of concurrent connections.
printf %s "\
fs.file-max = 128000
net.core.netdev_max_backlog = 2500
net.core.somaxconn = 250000
net.ipv4.ip_local_port_range = 10152 65535
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_max_syn_backlog = 2500
" >> /etc/sysctl.conf
sysctl -p

# Increase the limit on file descriptors.
printf %s "\
* - nofile 65535
" >> /etc/security/limits.conf

# Bypass the static compiled file limit in the debian httperf package.
sed -i 's/\(__FD_SETSIZE[ \t]\+\)[0-9]\+/\165535/g' /usr/include/bits/typesizes.h

# Download, build, and install httperf.
# Checkinstall creates a deb package to meet autobench dependency.
mkdir -p /usr/src
cd /usr/src
wget ftp://ftp.hpl.hp.com/pub/httperf/httperf-0.9.0.tar.gz
tar xvzf httperf-0.9.0.tar.gz
cd httperf-0.9.0
./configure && make
checkinstall --pkgname="httperf" --pkgversion=0.9.0 --pkgrelease=99 --maintainer="firstname.lastname@example.org" --provides="httperf" --strip=yes --stripso=yes --backup=no -y

# Download and install autobench.
cd /usr/src
wget http://www.xenoclast.org/autobench/downloads/debian/autobench_2.1.2_i386.deb
dpkg -i autobench_2.1.2_i386.deb

# autobenchd upstart script.
printf %s "\
description \"autobench\"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /usr/bin/autobenchd
" > /etc/init/autobench.conf
start autobench

# Default autobench_admin settings.
printf %s "\
host1 = testhost1
host2 = testhost2
uri1 = /
uri2 = /
port1 = 80
port2 = 80
low_rate = 500
high_rate = 4700
rate_step = 100
num_conn = 400
num_call = 1
timeout = 5
output_fmt = tsv
httperf_hog = NULL
httperf_send-buffer = 4096
httperf_recv-buffer = 16384
clients = localhost:4600
" > /home/ubuntu/.autobench.conf
chown ubuntu:ubuntu /home/ubuntu/.autobench.conf

# Optional custom hosts entries.
printf %s "\
10.1.2.3 example.com www.example.com
" >> /etc/hosts
Make sure you choose the right security group, then launch. Now, be warned that it can take a good 5-10 minutes for everything to start up and be ready to go.
If you take a look at the script you’ll see that it automatically sets up all the customizations that Nicholas had in his post. Feel free to tweak the script for your own purposes.
Finally, make note of the IPs or hostnames of your new instances and ssh to any one of them (email@example.com). Assuming you launched them all at once, you can use the internal IPs. Then use something like the following to run a benchmark:

$ autobench_admin --clients localhost:4600,10.1.2.5:4600,10.1.2.6:4600,10.1.2.7:4600 --file bench.tsv --single_host --host1 www.example.com --uri1 /
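Since we set output_fmt = tsv, the results file is easy to slice with standard tools once the run finishes. A minimal sketch — note the column names below are made up for illustration, so check the actual header row in your own bench.tsv:

```shell
# Fake two-column slice of a results file (illustrative column names only).
printf 'dem_req_rate\tach_req_rate\n500\t499.8\n600\t597.2\n' > bench.tsv

# Print demanded vs. achieved request rate, skipping the header row.
awk -F'\t' 'NR > 1 { print $1 " -> " $2 }' bench.tsv
```

When the achieved rate stops tracking the demanded rate, you've found the server's saturation point.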
Note that the default autobench_admin settings we specified in the script, in particular the number of connections, get divided across the number of instances you have. That’s why I’ve been going with 4 instances: when I tried 3, httperf started throwing errors saying that 133.33333 was an invalid number of connections. So either tweak the number of connections to be evenly divisible by your number of instances, or choose a nice round number of instances.
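The arithmetic behind that error is easy to check yourself. This little sketch uses num_conn = 400 from the config above and tries the two client counts mentioned:

```shell
# num_conn from .autobench.conf gets split evenly across client instances,
# so it must divide with no remainder or httperf rejects the fractional value.
num_conn=400
for clients in 3 4; do
  if [ $((num_conn % clients)) -eq 0 ]; then
    echo "$clients clients: $((num_conn / clients)) connections each"
  else
    echo "$clients clients: $num_conn does not divide evenly"
  fi
done
```

With 4 clients each one opens a clean 100 connections; with 3 you'd get the fractional 133.33333 that httperf complains about.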
Well, that’s it for now. Hopefully this will help some of you do some benchmarks of your own. I’ve already been comparing uwsgi to gunicorn+gevent. :)
Please comment if you have any suggestions or tweaks or anything. And let me know if you do any cool benchmarks using this.
Reblogging as text because dbinit’s tumblr is really slow, but the original can be found here.
I’ve been working on a similar setup and would have found this very useful about a week ago.
This is why I love programming and think that it’s really important for others to learn at least the bare minimum too.