Limitations to using a Raspberry Pi as your Super Fast Broadband router

I’ve seen a few recommendations about using a Raspberry Pi as an Internet Router. Last night I did some experimentation and found that unfortunately it’s not quite up to the task on my super-fast link (50Mbit/s, “soon” to be upgraded to 100Mbit/s).

I’m going to experiment with some other options and see if I can find a configuration that works – if I do, I’ll post an update. At this stage, though, I’m pretty doubtful it’ll work for me.

Testing Environment

  • Raspberry Pi revision 2 board.
  • Standard current Raspbian release (2013-02-09-wheezy). I kept the device in standard configuration (no adjustments to ethernet ‘Turbo’ mode, or to memory allocation).
  • Edimax EU-4207 USB ethernet adapter.
  • Iperf – the default version on all boxes.
  • Two very capable boxes – capable of sustaining full gigabit-ethernet iperf performance between them over a crossover cable. Both ran Ubuntu – one Quantal, and one Raring.
  • Direct CAT 5E ethernet links from the capable boxes into the two ethernet interfaces on the Raspberry Pi. All links (both directions) showed 100Mbit/s full duplex connections on both sets of interfaces.
  • Static IP addresses all round, with the two interfaces on the Raspberry Pi acting as a gateway.
  • IP forwarding on the Raspberry Pi enabled, but with no firewalls or NAT configured, for maximum performance.
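
For reference, the forwarding setup amounts to very little – roughly the following (the persistent-config line is the standard Debian approach; no addresses shown, since the ones I used don’t matter here):

sudo sysctl -w net.ipv4.ip_forward=1                            # enable forwarding immediately
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf     # and keep it enabled across reboots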

Machine A to Raspberry Pi

This test comprised running iperf from machine A to the Raspberry Pi using the Pi’s built-in ethernet port. Iperf performance seemed fine – topping out at about 94Mbit/s. Unfortunately there was pretty much no idle CPU time left, with almost everything spent in system time. The CPU was mostly keeping up, but there was very little wiggle room. I was pretty impressed with that, overall.
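
The test itself was nothing fancy – roughly the following (the address is an example; iperf runs for 10 seconds by default):

iperf -s                   # on the Raspberry Pi: listen for test traffic
iperf -c 192.168.1.1       # on machine A: push traffic at the Pi's built-in port
vmstat 1                   # on the Pi, in a second shell: watch where the CPU time goes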

Machine B to Raspberry Pi

This used the EU-4207 ethernet adapter. Performance was marginally better than the native port: iperf again seemed fine, topping out at about 96Mbit/s. Again, the CPU was pretty much 100% consumed by system time, servicing the packets coming in.

Machine A to Machine B

This is the real test, and unfortunately the Pi fails pretty badly here for my usage. Performance was quite variable, ranging from about 35Mbit/s up to peaks of around 55Mbit/s.

The vmstat output is more telling – what’s interesting here is that although the test ran for 10 seconds, ‘vmstat 1’ only shows 4 lines of output (there should be 10). The CPU is so badly lagged that vmstat goes on the back burner.

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 0 259932 18004 132188 0 0 35 21 1068 99 3 8 89 1
7 0 0 258716 18004 132188 0 0 0 0 5348 29 0 33 67 0 -- performance test starts here
7 0 0 256892 18004 132188 0 0 0 0 24359 17 0 100 0 0
7 0 0 255548 18004 132188 0 0 0 0 17461 8 0 100 0 0
3 0 0 256412 18004 132188 0 0 0 0 88444 77 0 100 0 0 -- note that vmstat outputs are missing
0 0 0 259420 18004 132188 0 0 0 0 1346 46 0 4 96 0
0 0 0 259420 18004 132188 0 0 0 0 348 60 0 0 100 0
0 0 0 259420 18004 132188 0 0 0 0 348 65 0 0 100 0
1 0 0 259420 18004 132188 0 0 0 0 348 61 0 1 99 0
0 0 0 259420 18004 132188 0 0 0 0 355 70 0 3 97 0
0 0 0 259420 18004 132188 0 0 0 0 350 65 0 0 100 0
0 0 0 259420 18004 132188 0 0 0 0 346 65 0 0 100 0

Many links are indeed slower than this, and the Pi is probably fine for those purposes (just don’t run any other apps on the Pi, otherwise you’ll end up with random slowdowns). The problem is that as soon as you start adding the overhead of NAT, PPPoE, and stateful packet filtering into the mix, performance is only going to get worse. And that’ll lead to packet loss and reduced browsing speed.

Remember also that this is a unidirectional iperf test. So if you’re uploading and downloading at the same time (hint, you are!) things will get even worse.
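
If you want to see that effect directly, iperf can run both directions at once – something like this (example address; -d is the dual-test option in the iperf 2 builds Debian ships):

iperf -c 192.168.2.10 -d   # run the upstream and downstream tests simultaneously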

I’d say that if your link is up to 20Mbit/s and you have a simple home-type network with limited NAT requirements, you’re probably fine with the Pi as a router. Above that, I’d start to be wary.

Optimising things

There’s always some room for optimising Linux’s networking, and I’ll have a look around to see if there’s some set of magic sysctls that’ll help here. I’ll post an update if I find anything. If you have any suggestions you’d like me to test, tweet me at @oskarpearson.
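
For the record, these are the sorts of knobs I plan to poke at – untested guesses rather than known fixes, and since the bottleneck looks like system time servicing packets rather than buffer space, they may well make no difference:

sudo sysctl -w net.core.netdev_max_backlog=2000         # queue more packets while the CPU catches up
sudo sysctl -w net.core.rmem_max=1048576                # allow larger socket receive buffers
sudo sysctl -w net.core.wmem_max=1048576                # allow larger socket send buffers
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 1048576"   # TCP receive buffer min/default/max
sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 1048576"   # TCP send buffer min/default/max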

Perl Crypt::PBKDF2 library with minimal dependencies

I’ve published a partially-compatible Crypt::PBKDF2 library with minimal dependencies on GitHub.

Review would be both welcomed and appreciated.

Note that you should probably use the original library instead, unless you are running a very old Linux distribution. The official Crypt::PBKDF2 library requires Perl’s Moose, which isn’t available on old operating systems – that’s the reason I had to put this together. Sometimes you can’t just upgrade everything to the latest OS; it’s more complicated than that.

Been working on a facelift for the London Hackspace website

Before

London Hackspace Website – Before

After

London Hackspace Website – After

Background

As you may know, I’m a member of the London Hackspace, a community-run Hacker Space in Hoxton. As a member, you get to play with pretty much everything: programming, Arduinos, electronics, and even Liquid Nitrogen (assuming the pledge goes well!).

I’ve taken some time to re-organise the homepage, based on an original design concept first discussed in 2011, using 960.gs and html5. I hope you like it! More importantly, I hope it helps build momentum at the space, and that the space might get some new members out of it.

I’m using the following historical image as a guideline: https://dl.dropbox.com/u/1909920/lhs-website-sketch-2011-01-08.png – that sketch is something Russ’ friend put together, and Russ posted it to the infrastructure group on 24th Jan 2011 (!). https://groups.google.com/forum/#!topic/london-hack-space-infrastructure/nUlZC7l3PmE shows the original emails.

Here’s the GitHub link for the updates, assuming you’re that interested.

Thought of the day from Linus Torvalds back in 1999

This quote by Linus Torvalds has stuck with me since I first read it back in 1999:

I generally have a problem with your patches: a lot of them are really
good. But at the same time a lot of them are changing things because
you’re in “random walk” mode, and you just walk over the code without
really thinking about them. In the end, you then have a huge patch, and
nobody really knows which part of your patch is really the worthwhile
part. Which is then why it takes so long for some of your _real_ fixes to
get into a stable kernel..

It’s not the only time he’s talked about software developers using the ‘random walk’ methodology for code improvement – he is occasionally rather less polite about it.

As a developer, if you notice something else is broken or needs improvement, fix each problem in a separate, internally consistent change that makes sense to the reader and the end user. Doing things this way means less cognitive load for everyone involved, as they can weigh up each part of your change with a consistent end-goal in mind, and build a clear mental model of your change without irrelevant detours.

This method also nicely matches one of the more general rules of debugging: change one thing at a time. If you change 10 different and unrelated things at once and the system breaks, it’s often much more difficult to determine why the problem occurred.

Following this method also means that it’s much easier to revert a single change that’s gone wrong, without the side effect of bug-free changes disappearing simply because they happened to be included in the same patch set.
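
In git terms, backing out a self-contained change is a one-liner – the commit id here is made up for illustration:

git revert abc1234    # undo that single change, leaving every other (working) commit untouched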

So: fix one thing at a time, and fix it right.

Punishing poor security by copyrighting common unsalted SHA-3 passwords

Geeky fun idea of the day – I want to pre-calculate and copyright all un-salted hashes of common words the moment the SHA-3 algorithm is approved.

As some of you may know, there are plans afoot for creating a new SHA-3 one-way-hash algorithm.

One-way hashes are often used in cryptography/security – the idea is that you can take some data (like a password, or an entire book/dvd/cd) and run it through an algorithm which returns a single number that uniquely identifies the input data. However, given the number alone, you can’t get back to the data (hence it’s “one way”). The returned number is indeed huge, but it is easily represented as a simple text string of a consistent length.

Many people use hash algorithms for storing people’s passwords in databases, so that they can avoid having a massive list of users’ passwords in easily-stealable form. The idea is that instead of storing the password in the database directly, you store an encrypted (hashed) version of it. When someone tries to authenticate themselves, the code takes the password the user supplied and re-hashes it. The code then compares the new hash against the database hash – if they are the same, the person entered the correct password.
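
A rough sketch of that flow in shell (md5sum is used purely because it’s to hand – the rest of this post explains why an unsalted, fast hash is a poor choice; the values are examples):

new_password='fred'             # what the user chose at registration (example)
stored_hash=$(printf '%s' "$new_password" | md5sum | cut -d' ' -f1)    # this goes in the database

login_attempt='fred'            # what the user typed when logging in (example)
supplied_hash=$(printf '%s' "$login_attempt" | md5sum | cut -d' ' -f1)

if [ "$supplied_hash" = "$stored_hash" ]; then
    echo "password correct"    # the hashes match, so the passwords match
fi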

This is all fine and dandy, except when the supplied password is a common word. If you use the word “fred” as your password, the hash algorithm will always return 570a90bfbf8c7eab5dc5d4e26832d5b1. So, if I find an encrypted password entry in a database which is 570a90bfbf8c7eab5dc5d4e26832d5b1, then I know the input value (their password) is “fred”.
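
You can reproduce that value yourself – the example hash above is in fact an MD5, but the principle is identical for any unsalted hash:

printf '%s' "fred" | md5sum
# 570a90bfbf8c7eab5dc5d4e26832d5b1  -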

The OED has, according to Wikipedia, 500,000 words. So it’s entirely possible to take every English word, hash it, and compare the results against the hashes stored in the database. Voilà – you have now figured out a huge number of the passwords in the database.

So what I want to do is pre-calculate (and copyright) all the hashed words in SHA-3. If someone stores them in a database, they are effectively using my copyrighted information.

The only problem is that it’s not possible to copyright a number… or is it? (oh, and the fact that it’s only theoretically possible to copyright longer texts…)

There’s a technique called salting, which helps avoid the problem described above. In short, a random string is added to the supplied password – so now instead of having to calculate the hash for “fred”, I have to calculate every possible combination of “fred” and all possible random strings. This is effectively equivalent to pre-calculating AAAAfred, AAABfred, AAACfred, etc, all the way to ZZZZfred. By choosing enough “salt” characters, I can make pre-storing all possible password/salt combinations impractical.
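
A quick illustration of the difference (the salt generation here is only an example – a real implementation stores the salt alongside the hash so the check can be repeated at login):

# unsalted: the same password always produces the same hash, so it can be precomputed
printf '%s' "fred" | md5sum

# salted: a random prefix means identical passwords produce different stored hashes
salt=$(head -c 4 /dev/urandom | xxd -p)     # e.g. "9f2c01ab" – kept next to the hash
printf '%s%s' "$salt" "fred" | md5sum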

Oh, if you’re interested – check out this page for more info on storing passwords in your database. In short, use bcrypt – more info here.
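
If you just want to see what a bcrypt hash looks like, a recent Apache htpasswd (from the apache2-utils package) can generate one – flags from memory, so check your man page:

htpasswd -nbBC 12 someuser 'some passphrase'
# someuser:$2y$12$...   (the cost factor of 12 makes each brute-force guess expensive)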

Alphanumeric passwords with enforced numbers – more “Security Theatre”

Reading through TechCrunch’s Depressing Analysis Of RockYou Hacked Passwords:

According to a study by Imperva, [the most common password is] “123456,” followed by “12345,” “123456789” and “Password,” in that order. “iloveyou” came in at no. 5.

I generate my passwords with APG, which generates passwords like this:

  • Irikyak6
  • RaypHiam6
  • radsErn2
  • reebrIjLi
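
For reference, the invocation is roughly this (flags from memory: -n is how many passwords to generate, -m and -x the minimum and maximum length):

apg -n 4 -m 8 -x 10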

As you can tell, the passwords APG generates are, for all intents and purposes, secure. However, some sites out there insist that the last one on the list is insecure. Why? It doesn’t have a number in it, so it must be terrible. 123456? That’s cool with them, though.

So riddle me this:

I’ve got a one character password. The password has to contain a digit.

What are the chances of guessing my password? Why, 1 in 10. I can’t use A-Z or a-z, given the rules, so the password has to be a single digit, 0-9.

However:

I’ve got a 1 character alphanumeric password. There are no rules about how many numbers the password must contain.

What are the chances of guessing my password now? The character could be A-Z (26 options), a-z (another 26), or 0-9 (10 more): so your chances are 1 in 62.

The same thing applies in longer passwords: the more information you give an attacker about the rules associated with a password, the less work they need to do to crack the password.
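
A back-of-the-envelope way to see that for an eight-character password (plain bash arithmetic):

echo $(( 62 ** 8 ))             # any mix of A-Z, a-z, 0-9: 218,340,105,584,896 possibilities
echo $(( 62 ** 8 - 52 ** 8 ))   # "must contain a digit": 164,880,377,053,440 – about a quarter of the keyspace is ruled out before the attacker even starts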

A better solution? Integrate a library like cracklib and set some reasonable rules about what passwords are, and aren’t, allowed. It’ll stop 123456 in its tracks.
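
cracklib ships with a handy command-line tool for experimenting with this (the exact wording of the rejection messages varies between versions):

echo '123456' | cracklib-check      # rejected – something like "it is too simplistic/systematic"
echo 'reebrIjLi' | cracklib-check   # reebrIjLi: OK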

Oh, and for goodness sake, encrypt your passwords when you store them in the database.

Shared Vocabulary, Problem Solving, and Domain Driven Design

In The Science of Screwing Up, Wired Magazine discusses Kevin Dunbar: “a researcher who studies how scientists study things — how they fail and succeed”.

When Dunbar reviewed the transcripts of [a meeting involving people from numerous disciplines], he found that the intellectual mix generated a distinct type of interaction in which the scientists were forced to rely on metaphors and analogies to express themselves. (That’s because, unlike [his comparison group of specialists,] the E. coli group, the second lab lacked a specialized language that everyone could understand.) These abstractions proved essential for problem-solving, as they encouraged the scientists to reconsider their assumptions. Having to explain the problem to someone else forced them to think, if only for a moment, like an intellectual on the margins, filled with self-skepticism.

Interestingly, the Domain Driven Design book makes a similar plea:

When domain experts use this LANGUAGE in discussions with developers or among themselves, they quickly discover areas where the model is inadequate for their needs or seems wrong to them. The domain experts (with the help of the developers) will also find areas where the precision of the model-based language exposes contradictions or vagueness in their thinking.

However, I find the essence of the two discussions to be slightly different:

  • The Domain Driven Design book encourages developers and architects to move towards one language, that can be shared between the stakeholders.
  • Dunbar found that it was the process of “working around” the differences in languages and concepts that produced the real results – a common understanding.
  • I think the correct solution lies somewhere between these two extremes.

Written on the coach – where we are and how we are doing

So, I’m currently on the coach on the way from London to Oxford. The bus has wifi access, so I decided to catch up on my blogging a bit.

We’ve been staying at my father’s house in Wolvercote. Technically it’s not “Oxford” itself, but for all intents and purposes it really is.

I’ve been working from home for Qualica. Siobhan’s looking for work, though the stress of the move caught up with her and wiped her out with a nasty cough and flu. Thankfully, she’s been feeling better recently, and is again knocking on doors.

Otherwise, we are doing really well. We’re really happy here, and are definitely both glad we made the move. Things will, I am sure, get tough at some stage. But in the meantime we’re just enjoying it!

(I only posted this now, even though I actually wrote it on the bus yesterday.)

We’re on our way!

We’re on our way to the UK! Siobhan and I are sitting in the very Dune-esque Abu Dhabi Airport.

Unfortunately, I didn’t manage to photograph the airport on the way in, but you can see a picture here (thanks to JohnnyJet.com for the picture).

I find the airport a bit claustrophobic – the main room is quite small, packed, and the ceiling is just a bit too low.

The final packing process was extremely unpleasant – too much stuff.

Excess baggage on Etihad Airways was simply exorbitant: they wanted to charge around R390 per kilogram. So a suitcase and an extra box would have cost us an additional R6,500.

For comparison, I could get a family member to fly out, bringing the luggage with them, for R7,178.00.

R3,478.00 of that amount would be for taxes.

Documents you should have for emigration

Through the process of emigrating, I’ve found there’s a huge amount of paperwork that people need. Some of these documents take 8 months to get from Home Affairs.

If you have a sudden compelling opportunity, the wait for paperwork will likely kill all hope. If you want the documents quickly, the only way is to hire a specialist who pushes things through the system (delaying everyone else, no doubt). This can cost thousands of Rand.

So, make sure that you have the following as soon as possible. Go in to Home Affairs (early morning is best), and save yourself months of waiting:

1. A valid passport.

   SA Passports are generally valid for 10 years – you should always have plenty of validity remaining. Some countries don’t let you visit unless your passport is valid for at least 6 months, so renew well in advance of the expiry date. Many countries don’t accept temporary passports either.

   Get a Maxi passport if you’re likely to travel a lot in the next 10 years.

   Applying for a new passport invalidates your old passport! Make sure you aren’t going to travel while you wait for your new passport to arrive.

2. Your old passports.

   Don’t destroy these, and don’t hand them in at Home Affairs when getting a new passport. In the UK immigration process, they want to know about all your visits to the UK, and they want your old passports included in the application if possible.

3. Your Birth Certificates.

   There are three types of birth certificate, and various immigration agencies require different ones. I strongly suggest you get all three and store them in a safe place. The UK requires a “vault copy”; Canada requires an “unabridged certificate”. If you get all three ahead of time, you’re all set.

   If you apply for all of these well in advance, you’ll pay less for all three than if you get one through a consultant in a hurry.

   1. Abridged birth certificates are the “standard” birth certificates that SA uses. They include only your details, and they are printed on yellow paper by a computer.
   2. Unabridged birth certificates include your parents’ details – their ID numbers and full names. They are also printed on yellow paper by a computer.
   3. Vault copy birth certificates are certified copies of the original paperwork submitted to register your birth. They are handwritten, and carry a certification stamp. KEEP THIS SAFE, as some sites indicate that you can only request it once in your lifetime. I’ve no idea if this is true.
4. Your parents’ and grandparents’ birth certificates are very useful if you’re going to need to prove ancestry. For the UK, FreeBMD will probably be useful, as they store free copies of the births, deaths, and marriages registry information for the UK.
5. Marriage Certificates for you, your parents, and your grandparents may also be important.
6. Bank Statements are very important – especially if they include your full street address, and preferably a massively positive balance. See the visa rules for the requirements relating to bank balances; they vary a lot across countries.
7. A Driver’s Licence that isn’t about to expire is a good idea. If you are overseas when it expires, this can save you hassles.