Two days after the big quake, I am in awe of Japanese civil engineering.

I'm considerably less impressed by the quality of the press coverage.

The best source for accurate news has been (again) Wikipedia's pages.

I was going to go into a big description of the nuclear crises in this post, but the following two articles describe what seems to have really happened, really well:

Aside from these pieces, the quality of the nuclear-related news has been abysmal, with easy questions left unasked or unanswered.

The reporters involved glom onto a big number, filter it through their preconceptions, re-report it blindly, and then go back to covering fashion and sports, or whatever they normally do.

Take the widely reported "radiation at 1000 times normal" factoid. Please. Read this, about the Incalculable danger. Get a little educated about short-term radiological byproducts. Pass that piece along to an irresponsible reporter or two. You'll sleep better.

While the hydrogen explosion was worrying, it takes time to cool a reactor down, and at least one reactor will be permanently ruined by seawater. Still, I think the imminent crisis has been averted - by amazing engineering, planning, and forethought, AND action. Not that it helps calm the pansies of the press.

To me the big question is: how THE HELL - in 1967 - do you design something so good that, after lasting 10 years beyond its intended design life, it can also withstand a quake 7x the size of what you designed it for? Followed by a tsunami? Whose hand can I shake? Who gets a medal? Are any of those engineers from the 60s still alive? I mean, WOW. They did that design, at least in part, with slide rules.

STILL, to me, the nuclear news is dwarfed by this:

In Japan, after the biggest quake they've ever had: millions of buildings still stand. Most of the reactors still work. Not a single tunnel collapsed. The bridges that are out seem to be from tsunami damage. All five missing trains reported in, and all passengers of the derailed train were apparently rescued.

In fact, most normal train service resumed the next day.

F-ing heroic civil engineering all across the board, if you ask me.

The principal damage was to seacoast towns with geography that looks like tsunami funnels. And even then, there was sufficient warning for many people to get clear.

Via wikipedia:

    "One minute prior to the effects of the earthquake being felt in Tokyo, the Earthquake Early Warning system connected to about 1,000 seismometers in Japan sent out warnings on television of an impending earthquake to millions. This was possible because the damaging seismic S-waves, traveling at 4 kilometers per second, took about 90 seconds to travel the 373 km to Tokyo. The early warning is believed by the Japan Meteorological Agency to have saved many lives..."

I wonder how fast this news made it onto the cell phone network? or IRC?

Regardless, a lot of people, from 14:46 JST (quake) to 15:55 JST (tsunami), had time to get clear, and while the imagery being shown around the world is of those seaside towns washed away, this disaster - without all the heroic civil engineering and planning done beforehand - could have been much, much worse.

Contrast this quake with the much milder (magnitude 7.9) Great Kanto Earthquake of 1923. According to Britannica:

    The death toll from this shock was estimated at more than 140,000. More than half of the brick buildings and one-tenth of the reinforced concrete structures collapsed. Many hundreds of thousands of houses were either shaken down or burned.

Consider also how much we've learned since the 2004 quake in Sumatra, which killed over 230,000 people.

While 4.4 million households in northeastern Japan are without electricity, and Japan faces a near-term electricity shortage...

Mere thousands, not hundreds of thousands, died.

The engineers of Japan have proven you can build a complex, high-energy civilization that can take the worst Mother Nature can dish out, and survive, with style.

And I'm sure they'll take the lessons learned from this disaster, and build something even better.

Bonus link: (which makes the above points even better than I could)

Sunday, March 13, 2011 Tags:

About 5 years ago I stopped listening to internet radio. If I (or someone else) was also using the Net connection, the audio dropouts per minute were so annoying that I shifted to purely downloadable media such as podcasts. I started using streamripper on the radio stations I liked, downloading those streams at night, when I wasn't otherwise using the network. I would dump the files on my laptop or mp3 player, and lo and behold, I had a steady supply of new music and news that wasn't annoying to listen to.

This shift in my behavior meant that I had to give up live and random internet radio entirely. And I did. Listening to internet radio had become intolerable.

I didn't understand why it was happening at the time. I had tons and tons of bandwidth. I had the most advanced OS in the world. I was working on bleeding-edge hardware. WTF? I surmised that the Internet was going to hell, and moved on. Later on I observed people with Mac and Vista hardware also driven mad by their inability to websurf and listen to Internet radio at the same time. They had a choice of one or the other.

Recently I added a NAS/gw to my network. It is both my internet gateway and a 3.5TB file server. As it was on 24/7 and using only 17 watts, I thought it was a better place than my laptop to move my radio storage to, so I could save energy.

Imagine my surprise when I tried to stream radio from that - across two wireless cards, to my laptop - AND do other stuff - only to have THAT audio start breaking up. My internal network runs at 1000Mbit/sec. My radios are 802.11n - I typically get 70Mbit/sec. Surely a little 0.128Mbit TCP stream should co-exist with all the other streams on the network? With just one, at least?

Nope. I ran a load-generating test tool (iperf) while running an audio stream via samba (CIFS, an old-fashioned file-sharing protocol, not web based; you may have heard of it), with the current Linux defaults throughout the network, and hit dropout city.

Now, maybe, I have a handle on the real problem; a theory. It could be bufferbloat! The huge amount of buffering in my Linux and Mac devices is interfering with modern TCP's congestion-control servo mechanism, letting it ramp up too much while the bloated buffers fill.
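To get a feel for the scale of the problem, here's a back-of-the-envelope sketch. The 70Mbit figure is my typical wireless throughput from above; the 1000-packet queue is the Linux txqueuelen default:

```shell
# Worst-case delay added by a full transmit queue: a rough sketch, not a measurement.
# txqueuelen defaults to 1000 packets; assume full-size 1500 byte frames.
pkts=1000; bytes=1500; rate_mbit=70    # 70Mbit/s: typical 802.11n throughput here

bits=$(( pkts * bytes * 8 ))                  # 12,000,000 bits sitting in the queue
delay_ms=$(( bits / (rate_mbit * 1000) ))     # time to drain it at line rate

echo "~${delay_ms}ms of added latency at ${rate_mbit}Mbit/s"
```

That works out to roughly 171ms of queue at full wireless speed. Worse, when the radio rate drops, the queue drains more slowly: at 1Mbit/s the very same queue is 12 full seconds of buffering, and no congestion-control loop can servo around that.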

Did bufferbloat kill my internet radio? You would think that two TCP streams - one low volume, at 128kbit/sec, and another running as fast as it could, at 70Mbit - could co-exist, wouldn't you?

They don't. TCP/IP is supposed to be fair. It doesn't appear to be. I knew life was unfair, but TCP is supposed to be!

So, after reducing my txqueuelens to a small number (16, down from 1000), and my DMA TX buffers as low as they can go (64, down from 256), one of my music players (aqualung) started working over my internal wireless network - under a saturating load generated by iperf - perfectly! Another (mplayer) still doesn't. Apparently aqualung is doing deeper buffering of its own, larger than the current (much smaller) period of the bufferbloat-induced jitter. How much, I don't know.
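For the record, the knobs involved look roughly like this (a sketch; wlan0 and eth0 are placeholder interface names, and not every driver lets you resize its ring):

```shell
# Shrink the interface transmit queue (default is 1000 packets):
ip link set dev wlan0 txqueuelen 16

# Shrink the driver's DMA TX ring, where the driver supports it:
ethtool -G eth0 tx 64

# Check what the hardware will actually let you set:
ethtool -g eth0
```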

I need to proceed cautiously, remembering RFC 968, and retest a few assumptions from the above, but:

My hypothesis is: 64 DMA TX buffers is STILL too much. Shrinking that is going to require some hacking, but thankfully I have source code to all the devices in the loop, and I figure it'll only take me a couple of days to see if I can bring that down to, say, 4, throughout.

There are numerous other hypotheses I can try on top of this - various forms of traffic shaping - exploring the buffering mechanisms mplayer and aqualung use - exploring samba's internals - looking at packet drops and retransmits - exploring what SACK does - trying out ECN - experimenting with radio speeds - in a nice, simple test environment.

All of these tests are aimed at solving a real problem:

Getting my beloved internet radio back.

IF my real problem with streaming radio has been, specifically, wireless bufferbloat all along, I'll be a really, really, really happy guy. Getting a solution worldwide may require an effort on the scale of Y2K - but at least - maybe - hopefully - having this diagnosis will lead to a cure, and the Internet's potential for even more interactive applications, like VoIP and gaming, can be fulfilled.

Update: Wireless bufferbloat is proving far worse than expected. My initial attempts at implementing all the above seemed to improve streaming over CIFS, but did not fix it entirely. From looking at the TCP traces, aqualung does a LOT of buffering, mplayer does nearly none, and so we end up with requests absurdly delayed by my IWL driver by 130ms or more, further delayed by other buffers I have no control over, and streaming fails. I have fixes for the IWL and AR71xx but have not had time to test them, as I've been too busy helping get up and running.

If you think you are suffering from bufferbloat, here are some simpler tests than mine you can perform.

Thursday, January 13, 2011 Tags:

I was a little miffed by Joel Spolsky's overly positive comparison of Stack Overflow to a USENET FAQ, so I posted the following over there yesterday. I'm still awaiting moderation... perhaps I caused a stack underflow over there?

I do think my little demo and my criticisms as to the size and shape of their actual elephant are quite revealing:

To Joel Spolsky (whom I otherwise deeply respect):

I spent the afternoon prototyping a gnugol engine that could parse the JSON output that stackoverflow provides into a format that I can deal with, in Emacs. I got it sort of working a few minutes ago.

gnugol screenshot

You'll note that my little tool (an afternoon's work) shows 10 results in 710×289 pixels, in a very USENET-FAQ-like format. I wanted to have some visual indicator of scoring along with the links back to your site, but didn't get that working in time for this posting.

This search takes up roughly 1/6 of my available screen area on my laptop.

For comparison, here is a link to a full size screenshot of your site, stackoverflow for something approximating the same search, with my tool overlaid on top, for comparison.

This screenshot of Stackoverflow shows only 2 results, while using all of my available screen area, in other words, 6x the space for 1/5th the answers!

When I’m working, and being productive, my screen is usually covered with non-web windows and looks something like this.

I hope you'll understand why, now, I find it difficult to participate in the modern internet to the extent that I did in the USENET days. I find interacting on most forums deucedly time-wasting and inconvenient, simply because they insist on eating your entire screen. USENET at least had kill files and scoring - built in and invisible, rather than up front on every page.

Some asides:

  • The stackoverflow json api is well documented. Thank you. (good)

  • There seems to be no way to go from a question to an answer in a single query. It would be nice if the API would supply both the text of the question and the recommended answer in one go. (bad)

  • The search-titles interface to the JSON API takes 7-10 seconds to complete. Searching google with a site restriction takes 384ms and yields a screenful of more relevant results. (bad)

  • Your (non-JSON) search engine interface uses an opensearch API. (good, but not JSON)

  • The fastest I’ve seen a result with that API took 709ms. (bad)

(I am based in Colorado, if that helps your speed-of-light calculation)

  • Searching titles with the JSON API, or using your web site's search, is nowhere near as effective as searching google with a site restriction of your site.

It’s the EXTRA distance stackoverflow invokes between a question and an answer that’s the problem here.

It might help (others) somewhat if the question(s) and (best) answers were actually on the same page on the search results, perhaps folded, using javascript.

  • From reading the API, I don't see support for vote up/down built in (then again, my eyes hath glazed over). I do like the idea of being able to provide useful feedback from my environment.

  • The Stackoverflow UI could take better advantage of the wider screens now available, as there is a lot of whitespace in the second screenshot. (bad)

  • I would like it very much if there were something like a "short-stack-overflow", so someone could participate and use the system in a window size that didn't interfere with actual work.

Please note that I would not make these comments unless I actually liked using Stack Overflow. I do; I just can't stand the UI.

Old school FAQs and USENET styles still have their place, and I would like it if it were possible to meet somewhere closer to the middle between these two extremes.

Thanks for listening.

Wednesday, January 5, 2011 Tags:

I have a tendency to explore a cool new idea, write some code to test it out, hit a snag, then shelve the project.

Or I'll write a hack to solve a specific problem and get it working well enough for me, but not be proud enough of the quality to go through the trouble of releasing it.

In both of these cases the code would bitrot, and I'd eventually lose it, only to have to rewrite it from scratch when I needed it again.

Also, in the past, I was often under NDA restrictions that made releasing any code require legal review.

Fourthly, managing small bits of code under a good source code control system used to require a lot of setup.

These latter two problems aren't a problem anymore with the advent of social coding.

My (early) New Year's resolution: REFORM! Release more code more often. In that spirit I've taken the time to put 2 of the coding projects I've been working on "out there", on my web servers, and in a git source code repository in the cloud that anyone can access.


The project with the most potential is gnugol. Gnugol is a command-line web search client with an interface to emacs. Unlike surfraw (which is excellent, btw, and whose web site is hysterically funny!), it outputs the search results in the format I'm working in - org-mode primarily, but also wiki and plain text. It saves a mental context switch.

Gnugol is also very fast, as it's written in C. I like it, and use it, a lot. I wish it did more stuff.


It has always bothered me that email's ancient RFC2487, published in 1999, basically prohibited "crypto-by-default", specifying a 530 5.7.1 "reject" message rather than "try again later, with crypto enabled". My cryptolist postfix policy daemon (modified from greyfix) checks for STARTTLS being enabled, and has a modifiable greylisting policy for encrypted and unencrypted email transports. It is not perfect but I hope it will encourage the use of more cryptography for email exchange.

Neither project is "done", but both are usable as is, and in focusing myself on getting releases out, I've:

  • Improved my own workflow. I can now (almost!) commit/build/publish a project with a single keystroke.
  • Improved both bits of code to where I'm not too disgusted with my coding skilz
  • Fixed a few bugs and added some features I've always wanted
  • Got a roadmap for both projects
  • Written/corrected a ton of documentation
  • Maybe helped out some people that have wanted something like these

Marvelously focusing is the idea of “doing a release by Christmas”!

I have numerous other half-baked projects I've been sorta-kinda-half maintaining for years, and I'm going to try to get them sorted out and “out there” so I shan't lose them ever again. But first, I want to finish these. Maybe there are potential contributors out there who can help tackle some of the bugs and missing features...

Monday, December 20, 2010 Tags:

Richard Stallman weighs in on the cloud and coins a new phrase: Careless Computing.

Spotted in the comments:

    “The trouble with lovely fluffy white clouds is, one day they will go dark, rumble a bit, then break and rain a lot.”

In light of this week's revelations - 1.5+ million logins/passwords stolen from Gawker (an extremely good analysis here in Forbes), another 200,000 public records left in plain text on a public backup server for months without anyone noticing, McDonald's getting breached, and the University of Wisconsin being exposed for over 2 years - it's time to rethink our attitudes towards data, ya think?

An answer to the privacy problem is to keep your own data inside your own computers and home, where there are clear boundaries between your stuff and their stuff...

The bigger the cloud, and the more people that have access to it, the larger the security problem. A bigger cloud = a bigger target, and a higher price that everyone else has to pay for the one mistake that one person can make.

    “I suppose many people will continue moving towards careless computing, because there's a sucker born every minute. The US government may try to encourage people to place their data where the US government can seize it without showing them a search warrant, rather than in their own property. However, as long as enough of us continue keeping our data under our own control, we can still do so. And we had better do so, or the option may disappear.” -RMS

In other news... every major Australian editor-in-chief - save two - has called upon the Australian government to do its damnedest to protect Assange. In the Washington Post (of all places!), Eugene Robinson has essentially echoed my arguments regarding terms of service.

Tuesday, December 14, 2010 Tags:

Every new year, I try to reduce the number of inputs I get, cleaning out my mailboxes, putting everything unsorted into an archive folder, and so on. So for the past three days I've been getting off of mailing lists I don't read, writing email filters for stuff I do read, and trying to get the avalanche of input I cope with down to a manageable level again.

After being on the Internet almost continuously for 20+ years, I get a lot of spam. I mean, a LOT of spam. One account, which I retired 10 years ago and haven't used since, has gotten over 20,000 spams in 2 years. It actually averages about 3 spam attempts per minute, and about 70/day were getting through.

Worse, my main email accounts started to get really cluttered with spam early last year, and I didn't know why. After a while the problem grew annoying enough that I moved back to the gmail accounts I use, which were doing a much better job of filtering out the spam.

I HATE using gmail for email, for a lot of reasons. I LIKE email fully integrated with my editor, and I like reading and sending email to be FAST...

So I decided to rethink my email setup. I'd put off greylisting for a long time - I disliked the idea of delaying an email for an hour, arbitrarily, just to deter "drive by" spam attacks. One of my favorite parts of email is getting email from strangers.

But after trying to figure out what had gone so wrong with my spamassassin config, I got fed up, so I put in place a postfix policy daemon called greyfix on my main email server. It helped, quite a lot, but spam was still getting through on the secondary MX exchanger. Greylisting something twice is not a good idea (now there are TWO places where a given email will be arbitrarily delayed), but the only other choice was to go down to one email server, and I wasn't going to do that, so...

Greyfix went on the secondary.


Wow. The silence of the spams. I have not seen ONE spam get through all day. The mail.log file that used to scroll by too rapidly to read now ambles by at a reasonable pace.

I also instituted a few other things that have (sigh) temporarily made it impossible for me to send mail to - amusingly enough - Comcast, my local internet provider. It turns out that they (and a few others I've checked) only accept mail from domains inside their servers that are actually the domain name of the host. While a good security measure, it messes me up, as mail from one of my domains doesn't work while the other does, even though both domains are me...

Moving along...

I was still getting about 4 spam attempts per minute from all over the world on the main server, and that was pretty annoying. It's a wee little ARM box, and doesn't LIKE being tapped on so often... So I said to heck with it. I created a whitelist of servers I exchanged mail with regularly - and blocked port 25 to all but that extremely small list of IPs on the main server. I figure that - since greylisting works - I might as well force most connections out to my secondary server anyway.

In other words - my main email server has gone "dark" and provides NO opportunity for random spammers - or normal mail exchangers - to get in. This is kind of similar to how many (normal) sysadmins put a main, massively mail-filtering mail server on the outside (DMZ) and the "normal" mailserver inside the firewall, except in my case I'm still allowing some mail servers in and also sending mail directly from the internal server. This makes mail OUT a bit faster, and mail in - mildly slower.

This is a variant of nolisting.
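In iptables terms, the "dark" primary looks something like this (a sketch; the addresses are documentation placeholders, not my real whitelist):

```shell
# Accept SMTP only from a few known-good mail exchangers...
iptables -A INPUT -p tcp --dport 25 -s 192.0.2.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 25 -s 198.51.100.20 -j ACCEPT

# ...and drop everyone else, forcing them off to the secondary MX.
iptables -A INPUT -p tcp --dport 25 -j DROP
```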

Wow... That increased the amount of legitimate email I get on the main server to 100%. I didn't have to even look at the logfile anymore, except for errors.

Update: Dang it! Servers that exchange email with me that ALSO implement greylisting temporarily reject mail, too. My primary mailserver gives up after too short a while and forwards the mail to the secondary - which does the right thing and ultimately gets the mail through - so my primary server never gets past the other server's greylist! This means that ALL mails I try to send from here to a greylisting server are now actually getting through slower... I guess I have to think this part through more.

Flush with these successes, I said to myself, “Self, what REALLY bugs you about email, now that you are down and dirty inside your mail system and don't want to go here again for a couple more years?”

Well, it bugs me that email from my desktop to the server is carried over a secure connection, but interchange between providers - between you and me - is not.

It doesn't make much sense to me that, 12+ years after STARTTLS was invented (and SSL security/certs came much earlier), and with everybody sending mail to their email provider over a secure channel, nearly nobody - certainly not the big email services - uses cryptography between their services - even when offered!!??

We've all come to accept that email is basically insecure. It would certainly help the spam problem, however, if certificates were required. It would establish a higher bar as a line of defense... And people exchanging mail would get a slightly higher level of security overall.

I started digging into the relevant RFCs and found out that the reason so few use STARTTLS for ALL email exchanges is basically a historical accident. It was too buggy in 1999 to use.

Not for the first time I wished Jon Postel had remained alive to guide email to a better landing...

I figure if enough people care, something will happen.

So I dug into that freshly installed greyfix code and figured out that newer versions of postfix actually tell a policy daemon whether the email exchange was encrypted or not.

So I hacked on greyfix for a while, to make it do “cryptolisting”.

What's Cryptolisting?

It's not that much different than normal greylisting, except that:

1) New people that send email via an encrypted channel get greylisted for a MUCH shorter time. I figure this is a good bet: anybody that uses STARTTLS for email interchange has more of a clue.

2) I figure spam sent via encrypted channels is close to nil, so I can further whitelist, on my internal/main mail server, the acceptable IP addresses from the database I get from the secondary mail server. This will speed things up for everybody I get encrypted connections from - a positive feedback loop for people that bother implementing STARTTLS on their outside mail exchanger.

3) I put in a custom header field that I hope to convince gnus to inspect, so I can colorize incoming email according to its transport security on my end.

4) It puts an informative delay message in the temporary reject, thus advertising its own existence to other sysadmins and giving me room to write a manifesto about getting more crypto into our email... whenever I get around to it.
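The core decision reduces to something like this toy sketch (NOT the real greyfix code; postfix does hand a policy daemon an encryption_protocol attribute, but the delay numbers here are placeholders I made up):

```shell
# Toy cryptolisting policy: a much shorter greylist for STARTTLS senders.
# $1: "yes" if the smtpd connection was encrypted (i.e. postfix's
#     encryption_protocol attribute was non-empty)
# $2: seconds since this (ip, sender, recipient) triplet was first seen
cryptolist() {
  local encrypted="$1" age="$2" delay
  if [ "$encrypted" = "yes" ]; then
    delay=60      # encrypted senders wait a minute (placeholder value)
  else
    delay=3600    # plaintext senders wait the traditional hour
  fi
  if [ "$age" -ge "$delay" ]; then
    echo "action=dunno"    # known triplet: let the rest of the checks decide
  else
    echo "action=defer_if_permit Greylisted, try again later"
  fi
}
```

A plaintext sender first seen two minutes ago still gets deferred; an encrypted one sails through.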

I've cleaned up the code a little, pushed it out to GitHub, and created a web site for cryptolisting, so if you want to fool with it, go ahead.

Doing a secondary MX exchanger is a bit trickier than I'd like, and further cleanup is needed.

Now I'm off to clear out my main mailboxes in time for the new year; 3000+ categorized emails left to go. I've got a whole bunch of other spam stoppers to put in place, AND I need to figure out how to get mail to Comcast again, but I'm really enjoying the silence of the spams... My mail is so quiet and so un-full of distractions...

EHLO? Is there anybody out there??

Monday, December 13, 2010 Tags:

The true law of the land now is the "End User License Agreement", the EULA. Being accused of violating one is enough to see your service cut off.

Non-disclosure agreements and employment contracts are truly frightening things. Mortgage agreements are stacks of paper 6 inches high.

I try not to think about all the EULAs I've clicked on, without reading, in order to use a valuable service. Somewhere in the (hundreds, by now, it must be) end user agreements I've had to navigate, I've probably clicked away the rights to my first-born, any unique idea I'll ever have should it prove profitable, my right to my last name, and god knows what else.

This morning, I took time out to read the few agreements I'm knowingly subject to - the ones that, violated in any way, would impact my ability to function in America.

I don't read patents, in the normal case, because of the triple damages rule, and because I find reading them very depressing. Every time I'm exposed to a patent, I'm non-productive for months.


I started my morning's reading with Comcast. I wasn't aware of this clause until this morning:

“10.2 Comcast may update the use policies from time to time, and such updates shall be deemed effective seven (7) days after the update is posted online, with or without actual notice to Customer. Accordingly, Customer should check the above web addresses (or the applicable successor URLs) on a regular basis to ensure that its activities conform to the most current version of the use policies.”

Comcast terms of service are 17 pages long. They just released an update to the terms of service on 10/25/10, which added the following clarifications to their agreement:

  • The Service cannot be resold or otherwise made available to anyone on the Premises or outside the Premises (i.e. wi-fi, "hotspots", or other methods of networking), directly or indirectly, unless done with Comcast’s written approval in accordance with an applicable Service plan.
  • The Service cannot be made available to anyone other than you or your authorized employees or contractors unless done with Comcast’s written approval in accordance with an applicable Service plan.
  • The Service cannot be used to send unsolicited bulk or commercial messages or "spam" in violation of the law.
  • The Service cannot be used to run servers unless you have selected a Service plan which includes a static or statically assigned IP address.
  • If you have selected a Service plan with a static or statically assigned IP address, the Service can be used to host a public website.

It's nice that running your own web server is expressly allowed, and I wholeheartedly approve of the anti-spam proviso. It's worrisome that running any other kind of server is left a little vague, even if you have rented a static IP address. It's clear that - at least at present - I can run my own DNS and chat servers. :whew:

But, at a stroke, they've banned wifi coffee shops and throwing a 12 dollar cat5e cable over to my neighbor to share internet service. Routing ipv6 around town with my nifty new meshed nanostation M5 radios... looks impossible. I wonder how airstream wireless, in Australia, and the various mesh networks in Europe are managing to innovate around restrictions like these? Do they have similar EULAs to deal with?

For "normal" Comcast cyberserfs, it appears that many (most?) people are violating the Comcast terms of service as they stand today. Anybody that uses dynamic DNS, a squeezebox server, or even a fileserver seems to be expressly forbidden now. Is it against the new terms of service to have ANY other kind of server in your house if you don't have a static IP?

There's a special set of rules for teleworkers. I'm not sure what they are, because they aren't online. You have to call to get them.

If you want merely a job interview with Comcast, you gotta sign a very restrictive NDA. Without a signature, there's no interview, and there's NO negotiation over the terms. After seeing their just-for-a-job-interview NDA, I'm sure their employment contracts make slavery, or working in the food service industry, look like a more appealing option.


At least in mid-2007, Google would let you get away with not signing an NDA for the job interview, as C. Scott Ananian documented his discomfort at signing that one. He evidently didn't get the job, but at least he can still talk about technology, including Google, at a deep level. Others at Google seem at least somewhat able to speak their minds, but I've spoken to some employees that have vanished into that company and don't say anything in public anymore, because they are frightened of their employment agreement.

Google maps' terms of service are not horrible.

THEN there's Google's terms of service for using their search API, which I'll talk about at another time.


Craigslist's terms of use, um, make my brain dump core.

I'm aware that the vast majority of provisos in all these agreements are rarely enforced, and are there primarily to cover corporate arses, but I'd like there to be AT LEAST the following things addressed in America's corporate culture:

1) All basic legal agreements (interview, employment NDAs, employment contracts, EULAs) posted online as a condition of being allowed to function as a corporation...

2) There be some attempt at thoroughly vetting these so people like me, without full knowledge of the law, do not have to be attached to a lawyer at the hip everywhere we go...

3) Some attempt at commonality - thorough vetting by independent organisations - that would reduce the reams of paperwork for using any new service or signing any corporate contract to something sane. I'd like STANDARDS for contracts, in other words. A Consumer Reports for contracts. Something like that.


4) I'd like, by law, for ALL NDAs in particular to EXPIRE in a reasonable time - like 3-5 years - except where national security is concerned, where it should be more like 10-20 years.

5) Contracts should not be able to be changed on a whim, with forward-updating changes accepted as part of the contract.

IANAL - but it's my hope that the current legal landscape already includes point 5. It did, when I last reviewed the law a decade and a half back, or so I seem to remember. Taking some of the steps above would reduce the asymmetry of power between employer and employee.

I run Linux, exclusively, these days, because the terms of service in the Microsoft EULA are unacceptable to me, as are most Microsoft-based products. I understand how the various licensing schemes Linux uses work - there are only about a dozen, well vetted by various court decisions - and the GPL, LGPL, and BSD licences I rely on, and trust.

The one piece of commercial Linux software I have - fully paid for - is the cepstral speech synthesizer. After reading their agreement today, it looks like I can't distribute an example of the cool way I use speech synthesis with my email notification system, which is too bad; I've been meaning to do that...

I don't know if there was an age, ever, where legal agreements so cluttered one's mental landscape. All I have is a distant memory of loyalty oaths during the McCarthy era. Now, THAT, was a simpler time.

After having my eyes glaze over on the first 120 pages of agreements today, I decided not to look at blogger's and just post this piece. I remember reading blogger's EULA 8 years ago, and it didn't have a clause in it that required me to read it again, then.

And, after retrospection, I decided to repost the same content here, on my new blog.

All my NDAs, except one of dubious enforceability, have expired. I'm glad of that. The EULAs, though, are beginning to bother me.

Saturday, December 4, 2010

I'm not big on cloud computing. This is amusing in its own way, because I built many services around massive clusters in the late 90s and early 00s. On the desktop I was an early advocate of SMP processors (1991), and felt (rightly, as it turned out) that multiple CPUs there made a big difference in interactivity and robustness. Three things about the cloud bother me:


1) Who the heck needs all their information OUT THERE, rather than IN HERE?

2) The massive efforts at parallelization going on in the clustered database field - stuff like BigTable - discard what we've learned from decades of relational database design.

3) Starting up whole instances of entire multi-user operating systems to run single applications seems like overkill. What was so wrong about multi-user OSes and the schemes we had in place to regulate their cpu usage?

I'm far more interested in what can be done using resources that are on the edge of the network, on devices that are cheap and fast, using code that is designed to be fast. Every home connected to the Internet has a perfectly capable, always on, computer attached to it, called a wireless router. Many people keep a small NAS around, too.

I get excited about new, low cost, low power chips that can be embedded in devices inside the home, or on your person. For the last year I've been hacking on an openrd box - built around a 1.2GHz ARM processor. I have it doing genuinely useful stuff - it's running this blog, DNS, a web server, a chat server, email, and storing about 3TB of data - for an infinitesimal cost per joule, and a one time acquisition cost of a little over a hundred bucks.

That little box eats 11 watts (I plan to replace it with a guruplug, next week, which eats 5 watts) - the hard disk, when spun up, eats another 6.
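A back-of-envelope sketch of what "infinitesimal cost" means here, for the 17 watts total (board plus spun-up disk). The $0.12/kWh rate below is an assumed typical residential rate, not a quote from my bill:

```shell
# Yearly electricity cost of an always-on 17 W box (11 W board + 6 W disk).
# The $0.12/kWh rate is an assumption.
awk -v watts=17 -v rate=0.12 'BEGIN {
    kwh = watts * 24 * 365 / 1000   # kilowatt-hours per year
    printf "%.2f kWh/yr, $%.2f/yr\n", kwh, kwh * rate
}'
```

Even if you double that rate, running the box for a year costs less than the hardware did.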

It's WAY faster than the wireless router it replaced. The local DNS server smokes the ones supplied by comcast. The web server serves up content 10x faster than anything on the internet, at a sixth the latency.

Downloads via the old router never cracked 11Mbit (even through the wired interface), now - the internet runs at 24Mbit, and via wireless, inside the house, it's pushing 150Mbits or more at sub 60 ms latencies.

I know it would be hard for "normal" users to use the openrd box (it's running debian) but I sure would like to see efforts being made to make immensely powerful devices like this - under your full control and in your home - easier to use.

But all the development money seems to be being sucked into the cloud. Progress on making things like the guruplug and related "plugs" non-hacker-only devices has been really slow.

For the past 6 months I've averaged 2 emails a month from recruiters from companies doing stuff in the cloud. They all look at my cluster experience and get hot and horny...

EVERY last company doing cloud stuff, at least in America, has people working for it that I've never heard of, and very few of those companies are releasing any code that I can run on my own machine. I was shocked to realize that every single piece of code I've been working with lately originated outside of America, actually. It seems like there is a giant intellectual black hole here - code goes in, and never comes out, except as a service.

For me the fascination of computers came from having one of my own that I could control and make do interesting stuff - unleashed, uncontrolled. If I did something to mess it up, it was just my fault, and my fault alone. Running stuff in the cloud scares me, one accidental infinite loop and I'm facing a massive bill, instead of my laptop or openrd merely heating up a little.

I hear, off in the distance, the VC's and their accountants chortling at the financial prospect of me coding and computing in their cloud...

The last cloud company I interviewed with asked me what I would do with 10,000 servers. I said, "Use them as very big and heavy Christmas ornaments?"

  • Underutilized processors

The vast numbers of smart cell phones being sold are in "clouds" of their own, almost entirely cut off from each other, even with CPUs and memory allotments that I would have paid tens of thousands of dollars for in 2000.

Cell phones are admittedly battery limited, but there is so much MORE that could be done on our cell phones, if only they could be effectively used. It boggles my mind that I can set an iPhone down next to an Android phone - both capable of communicating at 54Mbits/sec - and only be able to transfer files directly (via bluetooth) at 64KB/sec.

I'd love it if there was a BOINC client for my phone.

I wish I could switch the cloud conversation over to some other items that matter, namely latency, security, energy use, and ease of content creation, in the hope that perhaps more devices and services could exist at the edge of the network and inside the home.

  • Latency

My personal bugaboo is latency. I've ranted on this recently as everything that costs me latency costs me think time. I've gone to great extremes to cut my latency between tasks down by funnelling as many of the web driven applications I HAVE to use - like facebook, and twitter, and chat, into customized emacs-and-process-driven tools.

To get this blog from the aforementioned openrd box takes 60ms. From my server in SF - 600ms. If you use RSS to read this, the store and forward architecture of RSS cuts your latency to nearly 0. Nearly 0 is good. More than 100ms is bad. Why don't people "get" this?

I am processing over 30 RSS feeds via RSS2Email - scanning nearly 5000 messages per day from various mailing lists - in less than an hour. Via the web, it would take me all day, and I'd get nothing important done. Recently I switched to mostly reading^H^H^H^H^H^H processing RSS via - I can scan ALL the interesting stuff in seconds, just read what I want to read by hitting return, catch up on everything by hitting "c", and then stuff "Expires" so I never have to look at it again.

Search takes half a second via the web. A Xapian search of my entire dual core laptop takes about 50ms.

I really wish we could move the web conversation from our ever increasing bandwidth to a good discussion of our ever increasing latency, and what we could do to decrease it. I'm VERY happy that there is an application market for intelligent cell phones - the idea of being tied to a server ALL THE TIME for everything my handheld does is nuts. I LOVE internet radio, for example, but 3G isn't fast or reliable enough to stream podcasts or mp3s, so I store those up on the phone via some automated utilities for later - offline - playback. I shudder to think of the day that I'd have to pay-per-byte for data on the darn phone.

(Not that I'd mind if my cell phone supported X11(NX) based applications, but that's a rant for another day.)

Here's a case where the constraints of the cloud make me a little crazy: NetFlix Streaming.

Netflix's video on demand service is great, but the streaming video quality sucks. Netflix HAS A QUEUE of stuff they will automatically send you via mail OR you can stream it. I have about 84 items in my queue.

Given that I have a NAS, AND that queue - piled up - I could be downloading high quality videos at night, when I'm not using the net for anything else.

I know there are weird legal concerns about you having a cache of videos that you don't "own", but paying the latent price for "streaming low quality videos" or abusing the postal system seems really silly to me. Yes, I could use bittorrent to get full quality videos, overnight, instead of using Netflix.

  • Security

The web's security model is broken. It's just plain broken; it's almost unfixable. Worse, we've been dumbing down all our tools - from desktops to handhelds to servers - to make them secure enough to run apps in the cloud, and AS A RESULT we're making them less useful on our own machines AND sharing our data with people we probably don't want to share it with.

Here's an example: recently I switched from filtering my mail via the crufty old procmail utility and thunderbird's inherent filters, to sieve. Sieve has some nice features - the language is free of regular expressions, unless you want to use them, it's human readable, and compilable - it's much nicer than procmail in most respects.

But - in the name of security - sieve lost one feature of procmail that I used a LOT to make it easier for me to process my email - piping. I can - have, do - pipe email through a bunch of other filters that do interesting stuff, everything from automagically maintaining my bug databases to generating text to speech and chat notifications. I'm almost mad enough about this to start hacking at dovecot to make it "do the right thing", to re-enable myself to do what I've been doing for nearly two decades - processing vast amounts of email into useful content.
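For context, the procmail feature being mourned is just this sort of recipe - the filter script named here is hypothetical, but the shape is real:

```procmail
# ~/.procmailrc - pipe a copy of any matching message through a filter
:0 c
* ^Subject:.*bug
| $HOME/bin/bug-to-db   # hypothetical script that files it in a bug database
```

One line of glue, and mail flows through any program you can write. That is exactly what sieve, by design, won't let you do.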

The core word here is “process”. I don't “browse” the web. I process it. I process email. I process words and code. Browsing - or grazing - is what herd animals do.

I have a personal email server - D AT DONTSPAMME TAHT.NET. It just gets email to me and email from me, unbelievably fast compared to what it takes to download my email via imap or worse, read it with a web browser.

Anybody remember when having “Your personal webserver” was cool? I STILL run a webserver on my laptop - prototyping this blog, various other services, like rt, as well as a local search engine. What happens when I unplug from the network? Ubuntu AUTOMATICALLY puts my browser in offline mode and I CAN'T get to http://localhost anymore.

My laptop is a PERFECTLY, fully capable web server, far faster (and far less latent) than anything that can exist in the cloud. It comes, setup, out of the box. There are all kinds of cool server applications you can run with a local web server, like accounting or sales related tools. Why turn the network off so completely? Why assume you aren't going to use the web, personally, on your own machine?

My cell phone would be a perfectly capable web server, if only it could register itself somewhere sane with dynamic dns.

  • Safety

Stuff inside your firewall and home has CLEAR legal boundaries that cloud stuff does not. I am comforted by the fact that my email comes to my house, just like my regular mail does. Same goes for the stuff I have under development. Physical security is knowing only one person has the key to the front door and the computer room.

  • Content creation

There are so many things that respond badly to latency. I don't think anyone seriously thinks art (photoshop) creation or music production can move into the cloud. Some (non-writers, I hope) keep thinking that the act of writing can be moved into the cloud (and to a distressing extent, that does work, for short, twittery stuff).

A boss once called me an IP generator. He meant it as a compliment. As one of the people generating the IP, I'd dearly like more people to be enabling the stuff directly under my hands. There are many wonderful tools - like Ardour and emacs - that can help you do cool stuff, and escape the cloud. It's a bitch being one of the .004%.

  • In conclusion:

I'm not against "the cloud" for when it's appropriate. In fact I have a good use for cloud computing now. My friend Craig does a lot of high-end graphical rendering. The minimum time a typical render takes is about 4 minutes. Big ones take hours. Even if he had to upload an entire 160MB file to a render farm, he'd be able to cut that basic time to about a minute using a 10 machine cluster - and given the ~60 cent cost of 4 minutes of 10 machines' compute time on Amazon EC2, he could get 3 minutes of his life back - at a bill rate of $95/hr - which he could use for other things. He could get a lot more life back for the big renders.

Unfortunately, the licensing scheme for the maya rendering software renders this idea prohibitive. I've tried a few other renderers, like luxrender, with less satisfying results. Secondly, the time it would take to "spin up" 10 virtual machines for a 1 minute job would probably be well in excess of 1 minute.

The rendering problem I note above is different from most cloud applications in that what I want to do is fully utilize (rent), briefly, a LOT more processors than I want to own, and then return them to normal use. I hope that there is some service out there offering maya based rendering already, that uses some other method besides pure virtualization to apply to Craig's situation.

Hey buddy, can you spare a few cpu cycles?

I'd like to put Amdahl's Law up in big red letters and have latency get discussed, seriously and knowledgeably, every time someone talks about moving an essential service into the cloud rather than into your own hands. Maybe if there were a cacophony of reporters asking: “how latent is your new service?”, “how does it save me time?”, “will it work when I'm offline or outside the USA?” and “how can I integrate this with my workflow?” at every new cloud based product launch...

Maybe, just maybe, everybody'd get focused again, on enabling people, on their own hardware, to work better, and smarter.

I'm not holding my breath.

Friday, November 26, 2010

I find it easier to cope with audio interruptions than with visual ones.

I'm also not into my visual space being cluttered, even briefly, with little pop-up windows for events like “you have mail from J Random Spammer” or “Joe Blow is online”.

Years ago I bought a good software speech synth (courtesy of Cepstral) that I use for short notifications like that, instead. (more on how, below)

Given the number of interruptions I deal with and projects I'm on, it's difficult to keep focus on what's important.

There are also lots of things that I'd rather do than (for example) pay taxes or bills, and I'll subconsciously do ANYTHING else, no matter the deadline.

So, when I have trouble staying focused, I write my top 2-3 tasks into a ~/.nag file and have a cronjob and script keep reminding me verbally about what I should be doing.

The script, say-nag:

#!/bin/sh
# padsp mixes the speech in with whatever audio (music) is already playing
SPEECH="nice padsp /opt/swift/bin/swift -n Amy -m text"
#SPEECH="$SPEECH -p audio/volume=9"
TEMP=$(mktemp)
if [ -f ~/.nag ]; then
    cp ~/.nag "$TEMP"
    $SPEECH -f "$TEMP"
    rm -f "$TEMP"
fi

Once upon a time I had this script set up as a daemon (the speech synth is a rather large program), but nowadays just running it with nice makes it unnoticeable.

The cronjob runs during my work hours. It also nags me heavily towards the end of the work day.

# m h  dom mon dow   command
10 8,9,10,11,14,15,16 * * 1-5 ~/bin/say-nag
30,45 16 * * 1-5 ~/bin/say-nag

Voilà! A built-in mom on my Linux box. If I don't have anything particularly demanding on my schedule, I delete the file, so it says nothing.

I have audio notifications like this for important email (or at least I did, before dovecot arbitrarily replaced procmail with sieve), as well as new chats and a variety of other purposes - for example, on finishing an article, I have an emacs function, rtb ("Read That Back") which says the whole piece aloud.

Hmm... I may have to make this be a daemon again...

Wednesday, November 24, 2010

I like command line tools like traceroute, whois, groper, man, host, dig, gdict, etc.

A couple years back I'd written a tiny command line client that would let me search google (and a few other search engines) and output their results in plain text, ikiwiki format and org format.

I could also pipe those results into elinks or a speech synthesizer.

I loved it. I could search without leaving my editor or terminal session AND get it into a format I could easily use elsewhere, and track the keyword trails of my own searching, outside of google. (I don't bookmark stuff; I use org-protocol to sweep useful stuff into emacs, and remember only the keywords I searched on, and the position of the results.)

I didn't want to have to get a license to search, so I had a buddy write a scraper. That stopped working a few weeks later, so I bit the bullet and wrote something that used their API.

That API got deprecated a few months later, and my search client stopped working a year ago. I stopped using it, reverting to the web for all my searches.

Recently google annoyed me with their super-duper-REVENGE-OF-THE-PORTAL-like search...

So I spent my spare time over the last two weeks writing a new command line search client, using the newer json API that replaced the previous API, which had replaced my scraper.

I got THAT doing useful stuff again yesterday...

Only to find that google had deprecated THAT API... as of Nov 1st, 2010, and put a 100 query/day limit on the new one....

Today I heard facebook is trying to replace email entirely...

I am failing utterly at finding some form of commenting system for this blog that offers anonymity and privacy and local control of the comments...

I watch aghast as myopenid tracks every website you login to... that's a little more open about my “ID” than I would like.

Is Orwell spinning in his grave?

Hey, buddy, youse got a license for that question?

Meanwhile, all those other search tools I use daily have stayed the same.

Monday, November 15, 2010

If you haven't noticed from all my grumpiness, I recently moved from Nicaragua, where I had two massive 30 inch monitors, arranged in portrait mode, that I wrote on. I liked 'em.

I've had to move back to using a laptop with a secondary display, both in landscape mode, which means that I can't see a whole page of text.

With either screen formatted in 80 columns in a font I can read comfortably... I have about 30% of each screen's space horizontally left over for something else, like chat or my agenda... anything, other than ads! Something, productive... whatever it may be.

Maybe chat?

I'd written some elisp a couple years ago that made chatting with multiple people inside of Emacs's ERC client work better in landscape mode, which I just sat down and dug up:

(defvar last-split-type t)

(defun split-window-p ()
  "Determine if a window can be split."
  (> (window-height) 5))

(defun split-window-sanely ()
  "Split a window sanely, toggling the split type each time."
  (setq last-split-type (not last-split-type)) ;; globals bad! FIXME
  (if (split-window-p)
      (split-window)
    (select-window (get-largest-window)) ;; or get-lru-window
    (split-window))) ;; if we still can't split, just let it error out

;; Tile all servers

(defun erc-arrange-session-sanely ()
  "Open a window for every non-server buffer related to `erc-session-server'.
All windows are opened in the current frame."
  (unless erc-server-process
    (error "No erc-server-process found in current buffer"))
  (let ((bufs (erc-buffer-list nil erc-server-process)))
    (when bufs
      (setq last-split-type t)
      (switch-to-buffer (car bufs))
      (setq bufs (cdr bufs))
      (while bufs
        (split-window-sanely)
        ;; (other-window 1) ;; with this we get a lovely spiral on bottom right
        ;; (other-window) ;; with this we get a lovely spiral on top left
        (switch-to-buffer (car bufs))
        (setq bufs (cdr bufs))))))

What I mostly want right now is the tiling facility I get out of the awesome window manager where you can have a primary window (eating 80 columns!) and two or more secondaries, of any application.

The above piece of code helps a bit, but I'm strongly thinking I'm going to end up turning the darn laptop on its side again. Eventually I hope to pair up a pad of some kind - velcroed in portrait mode to the wall - and a GOOD keyboard, and get off the laptop entirely.

(My dream of switching to working on a pad, whenever a good android based one ships, is in part why I am so vigorously exploring low resource organizational systems like ikiwiki.)

Emacs works JUST great on a 1.2GHz ARM box - and I look forward, very much, to the day when I don't have to listen to a fan all day and can still write, code, and work.

I can't really wrap my brain around elisp today. There's too much C in my head.

Friday, November 12, 2010

I started writing this piece this morning to talk about two things - bandwidth - which is pretty well understood - and latency, which is not - in the context of getting better performance out of humanity's synergistic relationship with web based applications.

The problem is the speed of light!

    “For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.” - Richard Feynman

Yesterday, I accidentally introduced a triangular routing situation on my network, which effectively put me on the moon, in time and space, relative to google. I was a good 3+ seconds away from their servers, where normally I'm about 70ms away.

It made clear the source of the latency problems I'd seen while travelling in Australia and in Nicaragua, where google's servers (in 2008) were over 300ms and 170ms RTT, respectively.

Everybody outside the USA notices the KER... CHUNK of time they lose between a click and the web's response... and even in the USA this sort of latency is a problem.

Programmers try really, really hard to mask latency - web browsers spawn threads that do DNS lookups asynchronously, make connections to multiple sites simultaneously, and render as much of the page as possible while it is still streaming. For all that, the best most web sites can do is deliver their content in a little over half a second, and most are adding additional layers of redirects and graphical gunk that make matters worse - and all they are doing is trying to mask latency that is unavoidable.

It then takes me FAR more than half a second to process all the gunk on a typical web page.

Web based desktop environments have limited utility, despite the accolades they get in the press.

The speed of light is unbeatable. The Net is getting perilously close to it, and until we come up with a tachyon based networking system, the only way to outsource your desktop is to have the network resources EXTREMELY close to the user - less than 40ms away, a couple hundred miles at most - and doing that costs. Even 40ms is far too much: your home network and computer typically have latencies in the sub 2 or .2ms range, and significantly higher bandwidth than the Net can ever offer.
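The floor here is simple physics: light in fiber travels at roughly two-thirds of c, about 200 km per millisecond, so distance alone dictates a minimum round-trip time before any router or server does a thing:

```shell
# Hard lower bound on round-trip time imposed by distance alone,
# assuming ~200 km/ms signal speed in fiber (about 2/3 of c).
awk -v km=4000 'BEGIN {            # example: a ~4000 km one-way path
    printf "floor RTT: %.0f ms\n", 2 * km / 200
}'
```

No amount of server-side cleverness gets under that floor; only moving the endpoint closer to the user does.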

A trendy method for lowering perceived search times is to do a json search (via javascript) on EVERY character the end user types...

This is where living outside of the USA, or on the moon, becomes a problem. I'm going to pick on google here, but this applies to nearly every darn website out there that strives for better interactivity.

Google's front page now issues an http query after EVERY character you type, spitting out a new page on EVERY freaking character. As you might imagine, with a 3 second RTT as I had yesterday, this didn't work very well. Not only do you have setup and teardown of the tcp connection, but the content is fairly big... and there are DOZENS of DNS lookups that all now take place on EVERY character you type.

I looked at the packet trace of what was going on. I was horrified. This is what typing the second of two characters does to my (now repaired) network:

And that's only HALF the packet traffic a single character typed into google generates. I couldn't fit it all on the screen! Talk about SLAMMING the DNS server and the Internet! Don't these guys ever leave their cubies?

Especially outside of the USA...

  • Extra content costs...

  • DNS lookup costs...

  • And TCP's three way handshake costs...

  • And web redirects cost...

Not just bandwidth, but MY TIME. YOUR TIME. EVERYBODY'S TIME.
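Putting rough numbers on those bullets: at the 170 ms RTT I saw from Nicaragua, a cold-cache keystroke pays at least one round trip each for the DNS lookup, TCP's three-way handshake, and the HTTP request itself - and that's the best case, before redirects and extra content:

```shell
# Best-case per-keystroke cost at a 170 ms RTT, cold cache:
# one round trip each for DNS, the TCP handshake, and the HTTP request.
awk -v rtt=170 'BEGIN {
    printf "per keystroke: %d ms minimum\n", 3 * rtt
}'
```

Half a second per character, minimum, as a structural consequence of the protocols, not of anyone's server being slow.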

Having an ADHD-causing distraction for every damn character I type is driving me bats. Am I alone in this?

This is why I spend so much time ranting here - and fighting back - every KERCHUNK of my time lost to all this extraneous BS is time I'LL never have back. Those few hundred milliseconds of wait are too long for my focus and too short to form any new thoughts.

I was upset when tv went to commercials every 13 minutes instead of 15.

Now I get a commercial on every character! My screen flashes madly as I type at 120WPM...

The Net is for me, NOT them. I want my web time to be as minimal as possible so I have time for ME, my stuff, etc. I hate sacrificing half a second on every click to everyone else.

I wouldn't mind if there were some other way of keeping score as to your intellectual contributions to the world - even if it were a fictional currency, like whuffie. Reading and searching should cost you whuffie. Writing - and making searching better - should earn it.


<rant continue="y">

    “Even though the Web is slowly being eroded into the usual consumer-based, mindless dreck that every other form of media is... there are still 65,534 other ports to play on.” -- elf

I collect odd, obscure protocols that solve difficult problems in interesting ways. The world is a lot bigger than just the web's port 80 and port 443, and there's lots of useful stuff you can have on your own machine or network, as elf alludes above.

Some are still doing new interesting things at the lowest levels of the IP stack - take multicast for example: Dennis Bush's multicast UFTP meets a real need for distributing data over satellite links. The Babel protocol uses link-local multicast and host routing to make it possible for users to have multiple links to the Internet and choose the best one, reliably.

There are hundreds of other protocols that people use every day, without even noticing they are using them. I'm not going to talk about ARP or DHCP... or stuff layered on top of HTTP, like json, today.

My dictionary server uses a standardized protocol and runs directly on my laptop. I have multiple dictionaries (English, Spanish, eng-spa, spa-eng) installed. I LOVE having my own dictionary integrated into chat via Emacs's erc. Being able to spel stuff correctly while in a coding environment is cool too. Web based spell checkers bug me a lot.

I run X11 over the wire, using its native protocol, using one keyboard/mouse for multiple computers. See x2x or Synergy for details. Darn useful tools - and synergy works with macs and windows, too.

Rsync is blazingly fast, using its native protocol - it mirrors this website in 1.2 seconds flat. I am not sure why people keep re-inventing rsync, or use anything else for backups.

Git has its own protocol. I've moved all my source code and writing into git. Git lets me work offline in a way that no other source code control system can match, and have GOOD backups.

I use samba (CIFS) for filesharing - for copying large files it outperforms lighttpd on the same openrd box by a factor of about 3 on a gigE network - and, unlike a webserver, lets you copy things both ways and drag and drop - anybody remember drag and drop?

When I can't use samba, I use ssh and sshfs A LOT. I like drag and drop on the desktop. You simply can't do that easily over a browser. You can grab single files, but simple stuff like mv * /mnt/mybigdisk (move all the files and directories to this other directory, which happens to be over the network) doesn't happen unless you have a virtual filesystem to do it with, which tends to be slower than ssh and much slower than samba. I remember - :sniff: - when SUN seriously proposed WebNFS, an nfs:// url type. Vestiges of many older protocols exist as url types on browsing tools (smb:// and afp:// for example), too.

I would like common filesystem semantics like user based file locking and permissions to be part of the web's security infrastructure, too.

Not that this sort of stuff matters to most people nowadays. Most people share big files by swapping big usb sticks, for example, or upload stuff to web servers via sftp, or some form of a http post. Filesharing has become nearly synonymous with bittorrent, tainted with the stench of illegitimacy, when it's what we used to use LANs for in the first place!

Databases are all client/server, and the database takes care of the locking problems, so the loss of the old file system semantics goes unnoticed by everybody... except me and the system administrators who have to deal with the problems created on their side of the connection. And most everybody runs the database server locally nowadays and has no idea how to back it up, or optimize it.

I ran a dns benchmark recently. My local DNS server (running bind9 on an openrd box) smoked every other server outside my network, including comcast's, for speed, latency and reliability.

Now... I wouldn't have a problem with using the web for everything, if these other protocols weren't so efficient and useful on their own. Not just faster - more importantly - they are very low latency, as most of them run on your own machine or inside your network, and give you a useful result in much less than .1 second.

Back when I was still doing GUI design, anything over .1 seconds was viewed as too much latency, and over 3 seconds, anathema. Wikipedia tells me that the web moved the outside limit to 8 seconds. In the USA, wikipedia delivers a typical page in about 600ms. Not bad, but still about 6x more than I can stand, and far longer than what is theoretically required. Other sites are worse, and far worse the farther you get away from the cloud.

Too many forces are trying to put everything useful OUT THERE, on the web, rather than IN HERE, on my own machine. I am grateful for my android which - while it uses the Internet heavily - at LEAST caches my email so I can read it when I'm away from a good signal. Why can't I do that with my voicemail?

Yes, there are compelling advantages to having everything OUT there, on the web, for information oriented businesses. You don't ever have to release your application's code to the public, and go through a release cycle, test multiple platforms, nor support older versions in the field - these are huge cost savings for application developers.

On my really bad days, I think the web, as it has evolved to have ever more monolithic and centralized services like Amazon, Bing, google, etc., is also partially the GPL's fault. One of its loopholes (closed by the 2007 release of the AGPLv3, which few projects use (yet)) is that you CAN keep GPL based code on your own servers and never release it.

It's easy to leverage and enhance GPL'd code if you run a service, rather than distribute an application. But I've come to believe that the combination of all these outsourced services is not good for people.

Gradually, the money has moved nearly every service formerly done inside your firewall, onto the web. Mucho marketing dollars are expended making these applications shiny, sexy and desirable.

The browser has grown until it is often the only application a person runs. It takes over a full screen, and then you layer dozens of little apps and widgets over it until you have a tiny workspace full of tabs for your stuff, your thoughts, and your life. The screen-space consumption problem with a typical web page is so terrible - and with 1600x1200 screens becoming more common, with lots of spare space simply wasted - VCs are now producing specialized WHOLE browsers, like Rockmelt, targeted at the facebookers and twitterers... perfect consumers who apparently do nothing but chat all day, read ads, and buy stuff.

People with purpose driven lives run something like Outlook, or ACT instead. In my case I do as much as I possibly can, within emacs, using org-mode for gtd and time management.

The browser based desktop is not the right answer. There are huge disadvantages to having everything out on the Web - not just the privacy issues, but the human interface issues, that cannot be solved unless the data is moved closer to the user. Moving everything OUT THERE slows down the thought processes of the internet mind.

More protocols and tools need to migrate stuff to IN HERE.

Worse, moving everything OUT THERE can't beat the speed of light.


In ranting so far today, I've tried to identify and explain the latency and bandwidth problem on today's web in various scopes. I have been carrying around solutions to some of them for a long time, in addition to using specialized, local protocols, like my own dictionary server, using a jabber chat client for facebook and gmail... keeping the web browser a tiny part of my desktop, and all the stuff I mentioned in the early part of this blog entry.

My laptop runs its own bind9 DNS server, usually. It caches a huge portion of DNS, and I USED to configure it to take the dhcp-provided DNS server as a forwarder, so as to lighten the load on the main DNS servers, until people (like comcast!) started breaking DNS by redirecting you to a web page chock full of ads on a cache miss. Bind9 is a LOT smaller than most virus scanners, and more useful too.
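For the curious, a caching setup like that boils down to just a few lines of named.conf. The forwarder address here is a stand-in for whatever the local dhcp server handed out - substitute your own:

```
options {
    directory "/var/cache/bind";
    // forward cache misses to the upstream resolver;
    // everything already seen gets answered locally
    forwarders { 192.168.1.1; };
    // drop the forwarders clause entirely to resolve from
    // the roots yourself, bypassing an upstream that lies
};
```

That last comment is the comcast workaround: with no forwarders, bind walks the DNS hierarchy itself, so nobody upstream can substitute an ad page for an honest NXDOMAIN.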

I also - until google started hammering it - ran a web proxy server (Squid) for everything. In Nicaragua I used that for additional web filtering and to get around services that enforced USA copyright restrictions - a US citizen, overseas, shouldn't be blocked from US content - should he?

Running a web proxy server used to speed things up in the general case - I turned it off yesterday because google was hammering it so hard, and noticed that ubuntu's ever-so-user-friendly implementation of firefox was preferring ipv4 urls over ipv6, for some reason. Sigh. But that's another rant... middling sized corporations and educational institutions still use proxy servers, don't they?

I use a custom little command line search client that does searches via json, that gives me JUST results, on keywords, inside of Emacs. I'll have to get around to releasing that one day.

But all that - and other stuff - are just workarounds for trying to fool nature a little too much, using protocols that are inadequate and using tools that are OUT THERE, rather than IN here.

There's an old joke about the engineer facing the guillotine:

    On a beautiful Sunday afternoon in the midst of the French Revolution the revolting citizens led a priest, a drunkard and an engineer to the guillotine. They ask the priest if he wants to face up or down when he meets his fate. The priest says he would like to face up so he will be looking towards heaven when he dies. They raise the blade of the guillotine and release it. It comes speeding down and suddenly stops just inches from his neck. The authorities take this as divine intervention and release the priest.

    The drunkard comes to the guillotine next. He also decides to die face up, hoping that he will be as fortunate as the priest. They raise the blade of the guillotine and release it. It comes speeding down and suddenly stops just inches from his neck. Again, the authorities take this as a sign of divine intervention, and they release the drunkard as well.

    Next is the engineer. He, too, decides to die facing up. As they slowly raise the blade of the guillotine, the engineer suddenly says, "Hey, I see what your problem is ..."

I can't help but think that DNS is getting overloaded, and TCP is overstressed for the kind of workloads we are giving it today. TCP's three-way handshake and tear-down costs over a hundred ms in the USA, time that could be used for something else...

Now that I'm done ranting I'll hopefully get around to discussing some alternate protocols, after I get the code finished, and more stuff, IN HERE.

More in a week or two. I'll be calmer, too. Probably.

Thursday, November 11, 2010 Tags:

Regular expressions are a devilishly useful mini-language. Every few months I'll identify a place where a regular expression would be useful. If I'm working in a language that supports them cleanly, like Perl, I'll burn the hours or DAYS required to write the one line of code required to use them in the application.

Every high level language has a regex library; all have subtle differences.

This morning, I wanted to transform a string like this:

"<a href=test.html>Test</a>, <b></b> <b>Bold</b> <b>...</b>"

into a string like this: "Test, Bold."

Unfortunately, this week, I'm programming in C. C lacks a conventional string type, so memory management is a problem, but I'm used to that. I don't have the pcre library available (this is an embedded system), but the posix standard regex library is on the system. The root of all regex libraries is the regex implementation in C, so figuring out how to use that directly should be easy, right?


The manual page for the regex library is unhelpful, lacking even an example. All the C examples I could find on the web were embedded in other languages and other libraries, too general purpose to extract for my very specific case.

This one fails to compile the regexp entirely, and when I substitute a couple of simpler regexes, I fail to get anything good, either. Obviously there's something different about posix regexes that I don't understand. Yet. Or I'm doing something stupid with a pointer. It's hard to tell.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <regex.h>

// What I want to do is find a set of html tags so I can strip them out
const char *regexstr = "</?(?i:script|a|b|embed|object|frameset|frame|iframe|meta|link|style)(.|\n)*?>";
const char *teststring = "<a href=test.html>Test</a>, <b></b> <b>Bold</b> <b>...</b>";

// And also find any place with more than one dot and eliminate them 
// (but I'm not there yet)

// another example regex, this one to match a pair of html tags
// const char *regexstr = "<([A-Z][A-Z0-9]*)\b[^>]*>(.*?)</\1>";
// const char *teststring = "example";

#define OUTBUF (64*1024)
#define MAXMATCH 60

int main(int argc, char **argv) {
    char outputstr[OUTBUF];
    regex_t *pattern_buffer = malloc(sizeof(regex_t));
    regmatch_t pmatch[MAXMATCH];
    int res;
    if((res = regcomp(pattern_buffer,regexstr,REG_ICASE|REG_EXTENDED)) != 0) {
        regerror(res, pattern_buffer, outputstr, OUTBUF);
        printf("regex compilation error %d: %s\n", res, outputstr);
        exit(1);
    }
    if((res = regexec(pattern_buffer,teststring,MAXMATCH,pmatch,0)) != 0) {
        regerror(res, pattern_buffer, outputstr, OUTBUF);
        printf("regex execution error %d: %s\n", res, outputstr);
        exit(1);
    }
    // print each (sub)match so I can see what the regex actually grabbed
    for(int i = 0; (i < MAXMATCH) && (pmatch[i].rm_so != -1); i++)
        printf("match %d: %.*s\n", i,
               (int)(pmatch[i].rm_eo - pmatch[i].rm_so),
               teststring + pmatch[i].rm_so);
    free(pattern_buffer);
    return 0;
}

I guess I'm going to sit here and slowly write this one line of code, ever simplifying, or tracing a few other languages, until enlightenment hits. It would actually be faster to solve this one programmatically, walking the string for each pattern with something like sscanf. But a regexp is “the right thing”. It's definitely a Monday. Lots of searching and thinking to do today... for one line of code.

For all I know, C library regexps don't do unicode, either.

Monday, November 8, 2010 Tags:

I'm trying to make this site be speedy, look decent, and have functional navigation on handhelds and big displays. I WON the speed battle. The other two are going to require compromises. Still, I want to stay fast, so I'd like to stick to using characters rather than images, for navigation.

It's been a long time since I played around inside of unicode, and I just had a little fun fiddling within my character set, looking for useful symbols.

What I'd like to have is universal symbols for Login, Logout, Edit, Comment, Help, Backlink, Next, Previous, and Search, with mouse-overs for what they mean.

Using UTF-8 for my web text is neat, although until today I mostly just used it for Español. Ikiwiki doesn't mangle UTF-8 on the page; neither does elinks. I wonder what a speech synth will do to it?

… ⋱ ⋮ ᠁ ຯ ⌨ ⍇ ⍨ ⅗ ← ☂ ☻ ⚓

So of what I'm looking for, I basically have permalink (⚓), like (☻) and unlike, and link.

Watching unicode develop over 30+ years was worse than watching paint dry, and at the same time, fascinating. People from all over the world worked towards adopting a universal representation of every character that could ever be formed... and succeeded.

After looking over a bunch of other fonts today, and thousands of glyphs, I have renewed empathy for font designers. (I use the hungarian ellipsis ("᠁") when I write - a bad habit, actually - and it doesn't render in this font.) I wonder what this page will look like on a droid? An iphone?

I remember the battles in the mid 90s for downloadable fonts. The font designers wanted some way to 1) Get paid for their work and 2) Get their fonts used on the web.

What happened was everybody (else!) agreed on using a dozen different, basic fonts, and the font designers saw their often innovative work get relegated to being incorporated in graphic elements, only.

I don't know anybody that uses font servers anymore. It's kind of a shame things worked out that way. I remember running a cool font server under X11 (xfd!) for my own work and having oddball fonts on my console displays just to mess people up. Those were the days…

Other cool symbols:

☁ ☎ ☚ ☛ ☝ ☕ ☞

☠ ☣ ☯ ☸ ☹ ≋

Unicode even has the planets! (♃ = Jupiter, ♄ = Saturn), astrological symbols (♌ = Leo), musical symbols (♬), universal recycling symbols (♲, ♷, ♼), and symbols for married, divorced, and couples living together! (⚮, ⚭, ⚯ )

I am sure I'll find something ≆ what I need, eventually.

Sunday, November 7, 2010 Tags:

The flaws of the "web as desktop" model really became apparent to me in the third world. The lack of bandwidth and incredibly long round-trip times made using many websites almost physically painful. Local apps and local storage work a lot better.

Using the web for everything requires always-on - and global - connectivity, centralized servers, and advertising as a revenue model. The web security model is fundamentally borked, and I hate that 40% of my "desktop" is dominated by branding for every "product" I use and every page I read. I hate that the default colors for all the web apps are blinding black on white, where I prefer green on black.

I get my news and blogs via RSS so I can read them my way, in my own tools. I was stunned to see chromium (a google chrome derivative) didn't automatically recognise RSS feeds, and figure it's part of a conspiracy to keep people from using anything BUT the browser... I'm glad RSS exists, but I really miss netnews. I'd really miss RSS, if it went away.

Interfacing with the web is full of endless ADD-causing interruptions. Also while playing with chromium's (otherwise excellent) web analysis tools I saw ads on the web pages I visit frequently for the first time in the years since I installed adblockplus. Yuck.

Almost every search site out there now requires a redirect FROM its results to the actual web page, slowing down my mind, once again. Why can't I just cache the search results and have my own personal keyword to web page database running on my own box and share my thinking just with my own trust/peer group?

I HATE writing stuff on the web. I write in a professional tool - emacs - honed by 20+ years of hacking to be excellent for all forms of writing, not the casual sort of writing that the web encourages. I turn OFF my spell checker until after the initial burst of creativity is done, and have to use a custom spell checker that supports both spanish and english, regardless. The spell checker is a keystroke away. I also have a dictionary and thesaurus running on my laptop that is a keystroke away. My rfc database is a keystroke away. When posting to an online system of any sort I used to use a plugin for my editor - and I hate losing control of my content to their idea of a revision cycle.

It bugs the HELL out of me that there's no laptop with a portrait, rather than landscape display. And why do all the mainstream window managers have a "Maximize window" mode and not a "Maximize vertical space" mode? Most web pages look like crap at 1600x1200, I want room left over for my other applications....

I know I'm swimming upstream, but what's so wrong about:

Wanting to be able to work offline - carrying my drafts and final output with ME, always - to be in a field, somewhere, writing not even connected to 3G - or wanting to be doing complex, interesting things like making music with ardour or protools, creating content without a commercial break?


Preferring a BIG, empty screen or two for my work, my branding, my stuff? I don't have enough pixels to spare for ANYTHING else. I turn OFF the vertical scrollbar on every application I can. Fairly often I switch to a GREAT, innovative tiling window manager, to just get a few more pixels BACK, for ME. I can use them for BIGGER FONTS, for my old tired eyes. Every web graphic designer out there seems to be using 10 or 12pt fonts. I can't READ them. Reading stuff on an android - with the exception of this blog - is an exercise in futility, with all the zooming and scrolling around.


I LIKE that my chats are private, mediated by otr, and the only logs are kept on the correspondents' machines.

I LIKE that my email (used) to work the same way, and email was integrated into my editor. Heck, I liked it when the standard transport for every device I had - when they had a problem - was email - that didn't require always-on-connectivity.

I'd like to be able to make a phone call without going through skype, or a browser. This week, I've been experimenting with linphone, which supports p2p ipv6 calling, and video at higher framerates and quality versus what I can get from skype. Most sip clients work INSIDE my network, rather than requiring round trips to the Internet just to talk to my room-mate downstairs. The whole sip-based telephony market, IMHO, may be poised for a comeback - with Freeswitch long having IPv6 support and asterisk 1.8 having just released it.

I USED to collaborate on writing code - via X11 - using emacs's handy-dandy multiple display support. Web pastebins are VERY useful but it used to be so much easier to just slam the editor up on the helper's screen.

I still use multiple display support when I have multiple machines on my desktop.

Yesterday - ubuntu announced that it might be abandoning X11. Sigh. I've ranted about the advantages of network transparency elsewhere. Heh. I just re-read that rant. It's STILL spot on. I regularly support a friend that works out on an oil rig in Mexico, and (despite latencies in the 1 second range and a really hard firewall to get through), I fix his problems using ssh and X11 (NX). I don't know how I can help him if X goes away. VNC is unusable over this link. So is the web. SSH “just works”.

My aging nokia 770 handheld - 7 years old now and still ticking - lets me drag and drop files to it over sshfs. I keep it up to date with interesting stuff with rsync. It has a webserver. My shiny new android requires I plug it into the usb port, no ssh, no rsync, no webserver. Why is that? I can have an iphone and an android sitting right next to each other - capable of transferring files at 54Mbit per second - and they can't talk to each other! Both can talk to my old nokia though...

I will be radical here, and just quote elf:

    “Even though The Web is slowly being eroded into the usual consumer-based, mindless dreck that every other form of media is...there are still 65,534 other ports to play on.”

In short, I'd like it if the browser was just a small component of my desktop, and other interesting applications had space to breathe.

I can't be alone in wanting the world to look more the way I want, can I?


  • oh! a commenter! (via facebook!) Yea! (Why must my comments vanish into a silo like facebook? I'd have lost the interesting, off the cuff concept of the Interplanetary Overthruster if it had vanished into the facebook silo)

Chipper comments:

Just curious, what are your goals? Why are you spending so much time and effort on IPv6 as a for instance? * Because you intend to be a network administrator when you grow up? * Because you intend to be a system administrator when you grow up? The whole idea -in my mind- of having a home/business account with a static IP, is that it's so easy to apt-get install $whatever and just DO IT.

Yes, you can still get a static IP in the USA. But it's 1) now costing an extra 300 dollars/year, and 2) you can't easily get one elsewhere. This problem is only going to get worse.

The way I'm doing things now I have total control of the stack AND my servers, and can innovate with them. I CAN swim upstream. What those things are, I don't know, and that's the fun of it. Maybe I'll come up with something highly desirable, maybe not.

And maybe, one day, whatever that may be will be easier for non-system-admins to deploy.

Don't you remember the days where every sysadm got enthused about the Internet and moved mountains in order to make it work for you, your grandma, for everybody? We dragged cables across the floor, silently cross connected cables in server rooms without telling our managers, and MADE IT WORK because we thought it was a good thing.

Doesn't it strike you as odd that “social networking” requires sitting at home, alone, with a computer? I remember the good old days at “Cybernation” where people went OUT to get on the net, and they played games, together, in the same room. There's all sorts of ways people could be collaborating together, in the same workspace, not over the web, yet still mediated by computers, like what the Collide Factory is doing.

I've always wanted a jamophone - where I could play music with my neighbor down the street, over the net. Can't do it, using ipv4 and NAT.

But, it seems to me, that you are spending all of your time and talent reinventing wheels that were built really well decades ago.

Not all, but some! I really dislike dhcp, and am not fond of dnsmasq, either, as examples.

I go read NeX-6 every day loyally. All I get is "I reinvented another wheel, soon I'll be able to reinvent another wheel, and once that's done, I can reinvent another wheel, and that will allow me to reinvent all these other wheels, so that I can write."

I HAVE been (consciously) trying to rebuild the Net from layer 1 up. While in Nicaragua I went back to my very roots in networking and tried to look at it all with a fresh mind, to see what worked, and why, and where we went wrong, in the hope that I can find an interesting and useful path forward (see elf quote, above).

The biggest problem, as I see it, is the latency inherent in the current Internet model of “mediators for every useful service” - example: I'd like it if (one day) twitter didn't have a 140 character limitation and was (optionally) P2P over IPv6. Groups would be doing their surface thought exchange in under .1 seconds, instead of minutes. Having all the men in the middle, as we do now, slows the internet mind down, drastically.

I am delighted that I have retained at least one dedicated reader since switching to this blog format. Do you use RSS or the web?

Why not just write? Lookie here, I Just wrote this, without having to install anything. Shame I cannot comment on yer blog. Kinda the fun of blogs is the ability to have readers comment, which is also a big part of the not-fun side.

I totally agree that the comments are the fun part - the whole yelling-back-at-the-TV part - and I'm working on fixing it. The Myopenid link on the comment posts is currently working, but google and yahoo "openids" are not. I'm not satisfied with the security model of any commenting system, so I'll keep fiddling. Stay tuned.

Some anecdotal proof that the latency of the web was bad for my writing process:

After switching to this new, ikiwiki and git based format and local blogging method, I'm having a LOT fewer problems spewing out 1000 words an hour, which is what I just did.

My revision cycle (2300+ words total, including yours - is there a word-count feature on blogger? on emacs it's meta-word-count) took another hour, which is WAY shorter than what it was.

I used to pound out emails really quick, too, and handle thousands (seriously) of emails a day by pre-sorting them into appropriate buckets using procmail. Using gmail screws up that system for me. I'm going to put email back into emacs and see what happens.

Chip wrote in again:

And who uses linphone?

SIP interoperates with a lot of other sip phones. I'm only using linphone because of the on-going ipv6 experiment.

what the hell is p2p collaboration? And is there some huge shift in the 'lolcatz' paradigm that you are seeing?

Yea, my question exactly.

I see less than .004% of the general internet population creating content, and that may go as high as .004792% in 5 years. Assuming we have 5 years of course. That said, the movers and shakers in the wide world of bandwidth are exactly the folks we hoped/dreamed/and actively did our level best to hobble. Prolly, here in the US, 90% of bandwidth is being managed by the comcasts of the world. Come up with the most brilliant IPv6 deployment plan on earth, unless comcast adopts it, it won't really matter a whole hell of a lot.

They are rolling out stuff now. We'll see how it goes. The world is a lot bigger than comcast6. And yea, some of what I was trying to do with the wisp6 concept was to KEEP trying to hobble and advance faster than the central providers. I HAD a 2TBit backbone for 300Mbit p2p links for networking in Nicaragua and had hoped to come up with uses for it. Anybody can now build something like that, anywhere, much like Airstream did in Australia.

And at least, today, 6in4 is feasible for everyone.

I don't think. Hacking IPv6 linux-ee stuff is cool, and I'm glad folks are doing it, but it's not in broad deployment, and won't be in broad deployment any time soon, just like last year, 5 years before that, and 5 years before that.

I do think on long timescales. I adopted the Internet (well, usenet) back in 1985, when people were still using Fidonet and BBSes.

And since it's not in broad deployment, and working on it can't bring your personal economy up as well as working as a clerk at the local Bradly Food Mart/Sinclair station will, I question the effort. And does linphone IPv6 p2p collaboration support DTaht writing stuff?

Working on it now is bleeding edge and satisfying in its own way. Secondly - at least so far as ipv6 is concerned, it is very easy to deploy using 6in4 tunneling, on a static ip address, from just about anyone.

You only have a few years left, maybe a few more, maybe a few left, what do you want to do with that time? Just asking, to me, that's the goal.

That's a damn good question that I continue to struggle with.

PS, if you intent to reply from a p2p IPv6 hand-held smart phone-ee device, don't bother.

Write a song, (no more Rhysling for at least a year) or write an article about how YOU see the $election (which I see as meaningless). Write an article about SOMETHING someone OTHER than the useless drivelling hordes of slashdotters will care about.

I'll paypal you $5 to do this.

Chip's last point is kind of cool. I'd like to find some way of getting paid in some other currency than wuffie. I am totally unsure that the web is a place to do that, anymore.

Friday, November 5, 2010 Tags:

IPv6 has a wonderful feature called stateless autoconfiguration, by which every machine on a participating IPv6 network can get a valid and unique IP address.

Theoretically, this eliminates the need for DHCP, but it doesn't. There are no provisions for passing useful, and usually required, additional information along in the IPv6 autoconfiguration packet.

You can't get your ntp server, a fixed ip address, web server proxy, wins server, domain name, extra routes, or any of the other things normal DHCP for IPv4 provides - via IPv6 autoconfiguration. You can't get your default routing protocol out of DHCP for ipv4, either, as best I recall, which really bugs me.

MUCH internet heat was expended on merely getting a default DNS server incorporated into the RDNSS record, and it wasn't until a year or two ago that you could get DNS out of the radvd daemon and kernel under linux.
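Once it landed, the hard-won RDNSS option takes just a stanza of radvd.conf to use. The interface name, prefix, and server address below are illustrative stand-ins; substitute your own:

```
interface eth0
{
    AdvSendAdvert on;
    prefix 2001:db8:1::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
    # the hard-won DNS-server-in-an-RA option (RFC 6106)
    RDNSS 2001:db8:1::53
    {
    };
};
```

Note that this one stanza is the ENTIRE configuration payload router advertisements can carry - which is exactly the complaint above.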

While it would be possible to dump a set of additional autoconfiguration records into the DNS backend, most routers ran a really limited DNS server that would be kind of difficult to extend (and protect).

Back in 2007/8, I didn't like DHCP6 as an option, either - it was big, bloated, slow, and incomplete, and both the DHCP servers I tried were flaky.

While casting about for a sane means of full autoconfiguration while working on wisp6 I hit on an idea so simple, so clever, so audacious, so modern that I wondered why nobody else had proposed it. I had (and still have) one of those "I must be crazy" moments...

Every router and nameserver nowadays has a webserver. Most have the curl library for updates, and openssl is more or less a requirement - so why not serve up the additional information with a sanely designed - and fully extensible - text protocol, like json?

The requirements on the client machine(s) are curl (maybe openssl), plus a tiny json parser like jansson. A tiny change to the total architecture of the internet... ok...

So I decided that my clients would look at their local ra-provided router table and RDNSS servers, and attempt to fetch their configuration via http from those devices.

There would be no need for the complexity of a binary protocol like DHCP6, or a new one like AHCP - the router (and name server) have got everything on it you need already, and json would let you do what's required in a simple, well defined, text file format, over a standard protocol that already exists, that already had extensible security features like certificate based authentication.
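To make the idea concrete, a config document served that way might look something like this. Every field name here is invented for illustration - no such schema was ever standardized, which is precisely the ietf-shaped hole mentioned below - and all addresses are documentation examples:

```
{
    "domain": "example.org",
    "ntp_servers": [ "2001:db8::123" ],
    "http_proxy": "http://[2001:db8::81]:3128/",
    "extra_routes": [
        { "dest": "2001:db8:2::/64", "via": "fe80::1" }
    ],
    "tunnel_4in6": {
        "endpoint": "2001:db8::4",
        "client_v4": "10.1.2.3/32"
    }
}
```

A client that already knows its RA-provided router and RDNSS addresses just fetches this over http from each of them, parses it with jansson, and has everything DHCP would have told it - plus anything else you care to define.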

This is one of those places where the ietf needs to step in, as the code I got working was pretty rough, and the details got a little messy. (I also have a piece-of-crap json parser.) Over the next months, I'm going to work on well-defining the mechanisms I designed. I took this idea really far, extending it all the way to a concept of automatically generating 4in6 tunnels, thus eliminating the need for IPv4 entirely on anything but the client devices that actually needed IPv4.

I've been running bits of this code for forever. This is what my laptop looks like today. IPv4 IS NOT running natively on it:

    d@cruithne:/etc$ ip addr
    1: lo:  mtu 16436 qdisc noqueue state UNKNOWN 
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet scope host lo
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 100
        link/ether 00:1c:25:80:46:f9 brd ff:ff:ff:ff:ff:ff
        inet6 2002:4b91:7fe5:2:21f:3bff:fe2d:dff5/64 scope global 
           valid_lft forever preferred_lft forever
        inet6 fe80::21c:25ff:fe80:46f9/64 scope link 
           valid_lft forever preferred_lft forever
    3: wlan0:  mtu 1500 qdisc mq state UP qlen 1000
        link/ether 00:1f:3b:2d:df:f5 brd ff:ff:ff:ff:ff:ff
        inet6 2002:4b91:7fe5:3:21f:3bff:fe2d:dff5/64 scope global deprecated dynamic 
           valid_lft 48506sec preferred_lft 0sec
        inet6 fe80::21f:3bff:fe2d:dff5/64 scope link 
           valid_lft forever preferred_lft forever
    4: pan0:  mtu 1500 qdisc noop state DOWN 
        link/ether 5a:c5:ad:65:0b:4d brd ff:ff:ff:ff:ff:ff
    5: ip6tnl0:  mtu 1460 qdisc noop state DOWN 
        link/tunnel6 :: brd ::
    18: laptop:  mtu 1280 qdisc noqueue state UNKNOWN 
        link/tunnel6 2002:4b91:7fe5:8::2 peer 2002:4b91:7fe5:8::1
        inet scope global laptop
        inet6 2002:4b91:7fe5:8::2/128 scope global 
           valid_lft forever preferred_lft forever
        inet6 fe80::21c:25ff:fe80:46f9/64 scope link 
           valid_lft forever preferred_lft forever

And my routing table:

    d@cruithne:/etc$ ip route dev laptop  scope link  metric 1 dev laptop  proto kernel  scope link  src dev laptop  scope link  metric 1 
    default via dev laptop 
    d@cruithne:/etc$ ip -6 route
    :: via fe80::215:6dff:fede:fc11 dev wlan0  proto 42  metric 1024  mtu 1500 advmss 1440 hoplimit 4294967295
    2002:4b91:7fe5::1 via fe80::215:6dff:fede:fc11 dev wlan0  proto 42  metric 1024  mtu 1500 advmss 1440 hoplimit 4294967295
    2002:4b91:7fe5:1::1 via fe80::215:6dff:fede:fc11 dev wlan0  proto 42  metric 1024  mtu 1500 advmss 1440 hoplimit 4294967295
    2002:4b91:7fe5:1::5 via fe80::215:6dff:fede:fc11 dev wlan0  proto 42  metric 1024  mtu 1500 advmss 1440 hoplimit 4294967295
    2002:4b91:7fe5:2:215:6dff:fedf:f65d via fe80::215:6dff:fedf:f65d dev eth0  proto 42  metric 1024  mtu 1500 advmss 1440 hoplimit 4294967295
    2002:4b91:7fe5:2::/64 dev eth0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
    2002:4b91:7fe5:3::1 via fe80::215:6dff:fede:fc11 dev wlan0  proto 42  metric 1024  mtu 1500 advmss 1440 hoplimit 4294967295
    2002:4b91:7fe5:3:215:6dff:fede:f65d via fe80::215:6dff:fedf:f65d dev eth0  proto 42  metric 1024  mtu 1500 advmss 1440 hoplimit 4294967295
    2002:4b91:7fe5:3::/64 dev wlan0  proto kernel  metric 256  expires 0sec mtu 1500 advmss 1440 hoplimit 4294967295
    2002:4b91:7fe5:8::1 via fe80::215:6dff:fede:fc11 dev wlan0  proto 42  metric 1024  mtu 1500 advmss 1440 hoplimit 4294967295
    2002:4b91:7fe5:8::2 dev laptop  proto kernel  metric 256  mtu 1280 advmss 1220 hoplimit 4294967295
    2002:4b91:7fe5:ffff::1 via fe80::215:6dff:fede:fc11 dev wlan0  proto 42  metric 1024  mtu 1500 rtt 0.00ms rttvar 0.00ms cwnd 5 advmss 1440 hoplimit 4294967295
    fe80::/64 dev wlan0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
    fe80::/64 dev eth0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
    fe80::/64 dev laptop  proto kernel  metric 256  mtu 1280 advmss 1220 hoplimit 4294967295
    default via fe80::215:6dff:fede:fc11 dev wlan0  proto 42  metric 1024  mtu 1500 advmss 1440 hoplimit 4294967295
Thursday, November 4, 2010 Tags:

In light of the recent hoo-rah about the Firesheep plugin, I've been reconsidering my approach to site security.

About firesheep:

On a wireless lan (or a lan that still uses hubs, rather than switches, if any still exist), if you browse many popular web sites, you can't trust your neighbor on the lan not to be sniffing your traffic. Firesheep demonstrates that problem quite adequately!

But who is to say you can trust a centralized service either? Can you trust your trust group, on facebook? NO! Not currently. You've shared all that information with a central entity with MILLIONS of users and tens of thousands of customers - that (most likely) doesn't have your best interests in mind. I think that is MUCH worse than your geeky neighbor in a coffee shop sniffing your traffic.

The way to fix the firesheep problem is, first, to secure the web by using end-to-end encryption - and second, to limit your trust group to only people and organizations you can actually trust.

The big disadvantage to crypto is economic:

Big fat servers in the cloud slow down a LOT when the transaction is encrypted. Using p2p crypto drives up the costs in the cloud. It also annoys the NSA, and everybody else that wants to be a man in the middle - google, facebook, twitter - etc.

On my bad days I look at the modern “highly personalized” web as one massive man in the middle attack, actually, by everybody that wants something out of you (which is just about everybody offering a "free" service). I applaud Doc Searls's VRM efforts for this reason (but while he flies at 40,000 feet, I'm working at ground level trying to make the darn concepts work).

Peer to peer applications using IPv6 have a lot of potential to mitigate this problem, but so long as we have really insecure OSes like windows lying around, and users that have no concept of security or an easily implementable trust model, going P2P may introduce more problems than it solves.

Still, once you exit the cloud, your privacy options become more robust. If you are running your own web server for your content, you generally have a LOT more cpu available for cryptography. The open-rd I use has a built in (and under-supported, sadly) crypto engine that might help.

In my case I use several external applications that make my life a little more secure. For chat - now that facebook supports jabber - I use pidgin or emacs + bitlbee (version 3.0 just released!) with the otr cryptographic extension. It's good stuff; bitlbee especially lets me move all my chat accounts into one system, so I can run one application to chat, and one application only.

I run irc over a SSL connection, too. I even run my own jabber server. Conversations within my household never leave it. You can't get much more secure than that, but there are still plenty of other places where personal information can leak.

My laptop has a secure directory for my passwords. I just started using myopenid; I'm not particularly fond of outsourcing that idea right now, but I haven't looked into how to run my own open-id server (yet!).

My email used to come directly to my laptop, over IPv6, over a secure link. I haven't gotten around to re-implementing that, but I will soon.

On the web, I try to be careful. I change my passwords frequently, and try NEVER to log into an important service from a public terminal. (keyloggers are far more scary, and common, than stuff like firesheep) It bothers me a lot that skype's security model is kind of unknown.

I've decided I'll make this blog and wiki use https throughout, by default - for editing, at least. For everything, if I can swing it. I have plenty of spare cpu for cryptography throughout the system, now that I've got rid of the dynamism and database dependence of a conventional design. I think.

Years ago John Gilmore started the Freeswan project with the express goal of securing 5% of internet traffic in under 5 years. He failed - but vpns such as freeswan, strongswan, and derivatives are everywhere, and the idea of opportunistic encryption for ALL traffic remains doable - if enough people can agree on standards.

There are a few other old - and very secure - ideas, left unimplemented in modern days, that I may try also. The first is pgp for email, and perhaps other applications that need a trust group. The second is ssl browser certificates for login...
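That second idea is cheap to experiment with. A minimal sketch with openssl, where the filenames and the CN are placeholders of my own choosing, not anything prescribed:

```shell
# Sketch: create a self-signed SSL client certificate for browser login.
# The filenames and "/CN=example-user" are placeholders.
openssl req -new -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=example-user" -keyout client.key -out client.crt

# Browsers import client certificates as PKCS#12 bundles
# (empty export password here, purely for the sketch):
openssl pkcs12 -export -passout pass: \
    -inkey client.key -in client.crt -out client.p12
```

On the server side, apache's mod_ssl can then demand the certificate with SSLVerifyClient.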

Wednesday, November 3, 2010 Tags:

I've been trying to design the CSS and templates this site uses to be usable at the resolutions the droid uses, maximizing the available width and height to be readable, and yet still have useful navigation.

With something like 200,000 activations/day for the droid, and the iphone running a little ahead of that, you'd think more people would be targeting content for these devices.

But so far, in my design, not a lot of luck. Screen space is at a premium. I sure would like Fitts's law to apply to navigation; a drop-down css menu bar that used a single pixel at the top would suit me fine.

For a while I was just using my droid to look at the output, but I installed a droid emulator yesterday and maybe I can automate pushing out web pages to it.

I did roll a little script to make adding posts easier:

#!/bin/sh
# pc - post create
# BLOGDIR must be set in the environment
cd "$BLOGDIR" || exit 1

# Turn the spaces in the title into underscores for the filename
FILE=$(echo "$*" | tr ' ' '_')

cp templates/dave_template.mdwn "posts/$FILE"
git add "posts/$FILE"
emacsclient "$BLOGDIR/posts/$FILE"

I'm going to branch off the repository and use that for the styling and templating issues while I work on getting the content into the site, maybe make a mockup or two.

Not that it matters for the blog. I think the best way to read blog content on a handheld is with an RSS reader. I just haven't found one yet I liked. Both Feedsquares and Newsrob depend on google reader and the last thing I want to cope with is having google in the middle of that transaction, if I don't have to.

Wednesday, November 3, 2010 Tags:

My vote is for better capacity planning... but from the early returns it looks like the only legal dope you'll be able to get in California will be whoever wants to run the place.

Tuesday, November 2, 2010 Tags:

The openrd I've been using for development is the forerunner of a much smaller and cheaper home server than anything that has ever existed before. The various Plugcomputers contain the first embedded processors I've seen since the Strongarm (way back in 1998) that offer a nearly fivefold jump over the embedded processors designed before them.

(There are several others - notably the chips used in the android and iphone cell phones - that represent a similar quantum leap, but they don't have ethernet)

The Strongarm chip, running at about 200MHz back in 1996, blew away the 33MHz 68xxx processors that came before it, not only in speed but in power consumption, and obsoleted PalmOS almost overnight. Strongarm based handhelds caught Palm (and me!) by surprise. PocketPC took off in 1999-2001, not just due to the OS, but due to the chip that it ran on.

For a brief while handheld Linux was competitive, too, during that period.

Palm struggled mightily to catch up. They never really did.

I feel much the same excitement, now, when I hack on my 512MB RAM, 1.2GHz, 5W arm-based open-rd as I did when I got my first strongarm based handheld 11 years ago.

The marvell "kirkwood" chip in it - and in the various plug computers - can drive multiple SATA buses, two GigE ethernet ports, a PCI interface, and multiple USB interfaces; it has on-board encryption, and a ton of other features. It is also REALLY well supported by multiple versions of Linux. It "feels" very fast for an embedded system - especially since the fastest embedded chips I had been working on before it ran at 400MHz or less.

For 129 dollars (and a little effort) you can build one heck of a great home gateway. A few days after I got an open-rd and got debian running on it, it replaced both my home file server and internet gateway/web/email server. It's been incredibly reliable - this website, blog and wiki are run on it!

I'd like to get another so I can hack on the kernel some... maybe get the hardware crypto working... the guruplug server has the most promise to me - with a decent antenna on it it could (maybe) compete with a conventional wireless router.

In particular, I have found bind9 to be a lot - noticeably - faster in practice than the DNS servers in most of the wireless routers I've used recently. A fast local DNS server + GigE ethernet brings a qualitative improvement to the home Internet experience.

I've been trying to find a set of ideas that will bring a fivefold improvement in the home internet experience with the pocobelle2 project. Me being me, it's difficult to put the ideas into words that others can understand at this stage of development, and I admit to mostly going down a difficult path (enabling ipv6) rather than trying to create a useful-for-the-masses product at this point. (I'm not actually trying to create a product; I'm trying to get to where a product is creatable.)

The Amahi folk are doing a much better job than I am at describing the core idea AND at creating a useful product.

I haven't tried their stuff yet, being kind of committed to going down my own path, for now... but their materials look pretty good.

Tuesday, November 2, 2010 Tags:

I've achieved most of what I set out to do for the core parts of this new blog. NeX-6 now supports multiple authors, commenting, replication/mirroring, local editing, RSS, git, and of course, a static page design that is really, really fast.

It earns a 98 on Yslow (on apache) and a 94 (on lighttpd). I have to figure out how to make the eTags on the two mirrored servers work together, and ExpiresByType on lighttpd, but that's it for optimizations on the web server side unless I come up with something cleverer.

Running on my laptop, it's under .1 second to load the main page.

Inside of my own network, the openrd box serves up most content in less than 60ms, of which about 16ms is spent on DNS lookup. Roughly 50ms later, everything renders. Getting the typical blogpage down to .1 second was my goal. I almost got there! I am seeing a paint event out at .5 seconds for some reason...

It's like, real, now.

For comparison, a single page on my main blog (which admittedly has way more stuff on it) currently takes 4.3 seconds to load - and that's in the USA! In Nica it was way worse; in either case, well beyond my personal tolerance level. There's all kinds of stuff in The Bleeding Edge that I'd forgotten about - technorati, google adsense, feedburner, all kinds of javascript and css - does any of this stuff do me any good?

Left to do is better handheld support. I just need to increase the font size in certain spots and I think I'll be in business. It'll hurt my yslow score for sure, and will require testing against oodles of browsers, too. Might as well add that to the test plan....

During the entire development phase I was running my house network purely under IPv6. I'd talk about that, but it required so much setup that I'll leave it for another day. I enjoyed playing with the chromium browser tools as well as yslow, and hacking on the CSS, etc, but it's time to get to the real work now - getting a ton of org-mode content into a wiki format and fixing a boatload of broken links.

But first I'm going to take a walk outside and enjoy the fresh air.

Update: although my local newsreader reads my RSS just fine, facebook is failing to import and I fail a few feedvalidator checks... darn it... twiddling...

Monday, November 1, 2010 Tags:

Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.

We have no elected government, nor are we likely to have one, so I address you with no greater authority than that with which liberty itself always speaks. I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.

Governments derive their just powers from the consent of the governed. You have neither solicited nor received ours. We did not invite you. You do not know us, nor do you know our world. Cyberspace does not lie within your borders. Do not think that you can build it, as though it were a public construction project. You cannot. It is an act of nature and it grows itself through our collective actions.

You have not engaged in our great and gathering conversation, nor did you create the wealth of our marketplaces. You do not know our culture, our ethics, or the unwritten codes that already provide our society more order than could be obtained by any of your impositions.

You claim there are problems among us that you need to solve. You use this claim as an excuse to invade our precincts. Many of these problems don't exist. Where there are real conflicts, where there are wrongs, we will identify them and address them by our means. We are forming our own Social Contract. This governance will arise according to the conditions of our world, not yours. Our world is different.

Cyberspace consists of transactions, relationships, and thought itself, arrayed like a standing wave in the web of our communications. Ours is a world that is both everywhere and nowhere, but it is not where bodies live.

We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.

We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.

Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here.

Our identities have no bodies, so, unlike you, we cannot obtain order by physical coercion. We believe that from ethics, enlightened self-interest, and the commonweal, our governance will emerge. Our identities may be distributed across many of your jurisdictions. The only law that all our constituent cultures would generally recognize is the Golden Rule. We hope we will be able to build our particular solutions on that basis. But we cannot accept the solutions you are attempting to impose.

In the United States, you have today created a law, the Telecommunications Reform Act, which repudiates your own Constitution and insults the dreams of Jefferson, Washington, Mill, Madison, DeToqueville, and Brandeis. These dreams must now be born anew in us.

You are terrified of your own children, since they are natives in a world where you will always be immigrants. Because you fear them, you entrust your bureaucracies with the parental responsibilities you are too cowardly to confront yourselves. In our world, all the sentiments and expressions of humanity, from the debasing to the angelic, are parts of a seamless whole, the global conversation of bits. We cannot separate the air that chokes from the air upon which wings beat.

In China, Germany, France, Russia, Singapore, Italy and the United States, you are trying to ward off the virus of liberty by erecting guard posts at the frontiers of Cyberspace. These may keep out the contagion for a small time, but they will not work in a world that will soon be blanketed in bit-bearing media.

Your increasingly obsolete information industries would perpetuate themselves by proposing laws, in America and elsewhere, that claim to own speech itself throughout the world. These laws would declare ideas to be another industrial product, no more noble than pig iron. In our world, whatever the human mind may create can be reproduced and distributed infinitely at no cost. The global conveyance of thought no longer requires your factories to accomplish.

These increasingly hostile and colonial measures place us in the same position as those previous lovers of freedom and self-determination who had to reject the authorities of distant, uninformed powers. We must declare our virtual selves immune to your sovereignty, even as we continue to consent to your rule over our bodies. We will spread ourselves across the Planet so that no one can arrest our thoughts.

We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.

John Perry Barlow

Davos, Switzerland

February 8, 1996

Sunday, October 31, 2010 Tags:

Fiddling with CSS containers is like working with wet paint. You can almost get what you want with a few minutes of fiddling. Getting an entire page right, particularly from a template generator, is hard. I've been at it for days now.

Trying to get vertical space AND keep the content flowing for elinks has been painful. I'm almost there.... display relative.. have to style differently for inline content...

CSS Positioning in 10 steps was REALLY helpful.

Saturday, October 30, 2010 Tags:

As I like all my external links to be valid, I wanted a linkchecker. The python based "linkchecker" looked like it was perfect for the job, so I installed it this morning and ran it.


It got stuck spawning 10 copies of the ikiwiki cgi. Each invocation takes about 10 seconds (which worries me, long term), so let's NOT check cgi urls:

linkchecker --ignore-url=".cgi"

That took 10 seconds (too long!), so I arbitrarily upped it to 400 threads from the default of 10...

linkchecker -t 400 --ignore-url=".cgi"

This pegged my CPU at 110%, but then the local webserver started missing requests and stuff started timing out. 20 threads was also bad... I could tune up apache or try lighttpd at some point. 10 threads, excluding the cgi, seems optimum.

So then, just for grins, I decided to run this against my original blog still hosted on blogspot. It has - probably - 1000s of broken links.

linkchecker -t 400 --ignore-url=".cgi"

Ha. This (quite!) effectively executed a denial of service attack on my own blog. Blogspot (quite rightly) started throwing 503 errors at me. Don't do this to other services, people!

ID         9979
URL        `' (cached)
Name       `06/27/2004 - 07/04/2004'
Parent URL, line 427, col 5
Real URL
Result     Error: 503 Service Unavailable

Since I develop the Nex-6 blog and wiki locally, I can link check it locally on my own webserver and only DOS myself rather than a site on the internet... but I think fully checking my old blog, on blogspot, for errors, is going to take a VERY long time if I have to limit myself to 10 threads.

So how can I avoid doing a DOS of myself in the future? I can ratelimit incoming connections to the webserver itself with an iptables (and ip6tables) rule, and I can also (using apache) reject large numbers of requests from individual ip addresses. I don't know if lighttpd has this feature or not.

The core problem here is that although the main webserver is REALLY fast with the static content, the one cgi in the system is REALLY slow - it takes over 10 seconds to complete. I have that hanging off of another url than the default, so I can probably tarpit or ratelimit requests to it - via IPv6, anyway - at the OS level rather than at the webserver level. Unfortunately that won't work for ipv4 connections, as I only have one ipv4 address available.
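One way to sketch that OS-level ratelimit is with the ip6tables "recent" match. The port and thresholds below are guesses of mine that would need tuning, not measured values:

```shell
# Sketch: drop a source address that opens a 6th new connection to the
# web port within 60 seconds. The numbers are guesses, not recommendations.
ip6tables -A INPUT -p tcp --dport 80 -m state --state NEW \
    -m recent --name web --update --seconds 60 --hitcount 6 -j DROP
ip6tables -A INPUT -p tcp --dport 80 -m state --state NEW \
    -m recent --name web --set
```

A well-behaved link checker capped at 10 threads would never trip this; a 400-thread run would.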

I have lighttpd running on the (relatively weak) openrd box. Hmm... after I get mirroring setup, let's see what happens if I SLAM that.

The whole post and publish thing was getting to me from a typing perspective, so I wrote a tiny little shell script called pp:

git commit -a -m "$*" && git push

pp this darn blog

thusly creates a commit and pushes out the blog to the local server. ppp (who uses ppp anymore?) pushes the thing out to the main servers.
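ppp itself can stay nearly as small. A sketch, where the remote name "public" for the main servers is an assumption of mine - substitute whatever `git remote -v` shows:

```shell
#!/bin/sh
# ppp - commit everything, push to the default (local) remote, then
# push to the public mirror. The remote name "public" is an assumption.
git commit -a -m "$*" && git push && git push public
```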

Thursday, October 28, 2010 Tags:

In realizing that the RSS feed had become the most important part of my old blog, I started looking at the RSS feeds ikiwiki generates. They are not very good.

Darn. I was getting close to being able to publish: styling was nearly there, and I'd written some useful content. I may have to go back to zero now and evaluate other blikis. I filed a couple of bugs over on the main ikiwiki, like More and RSS generation and Logout in ikiwiki, and I'm trying to remember the other bug I filed...


It looks like I can mangle the output template to do what I want. I want licensing, copyright, a description, subtitle, and a few other specific RSS features.

It would be cool if I could figure out how to generate permalinks correctly, too. Autodiscovery seems to be broken, too... I'd like to have a means of generating an enclosure, as well... Then I'll have it...

I figured out that autodiscovery did not work because the rss feed doesn't like not being in the root of the site. This means it will work whenever I go up on the main site(s).

Update 2

OK, I don't regard this as a showstopper anymore.

Along the way I noticed that my local apache server wasn't listening on ipv6 (!!!!?) and that the popular service didn't work over ipv6 at all. I emailed the feedvalidator mailing list about it, and got apache "doing the right thing" again, locally. I'm running an older version of ubuntu on my main server as well as debian, it looks like this is yet another annoying IPv6 breakage in the 10.04 Ubuntu release.

Getting Apache listening on ipv6 again was easy; I just added to /etc/apache/ports.conf:

Listen [::]:80


I installed feedvalidator locally. The feedvalidator code works over ipv6. Yea. I also got linkchecker installed. I swear I had a highly parallelized linkchecker - this one is taking forever, even running locally....

Thursday, October 28, 2010 Tags:

A long time ago google used to index my entire blog. It doesn't anymore (nor do I think it should), but there are things I came up with that I'd like to remember that are no longer in google's index for some reason.

Google had been my external memory for a long time. It bothered me a lot that it was forgetting things on me, things like a piece I wrote about epicycles - which coined the phrase "interplanetary overthruster" - I did a search for the phrase on google; it's not there... I swear I wrote it....

Update: I found it! I exported the old blog in blogger's unbelievably dense xml format and grepped for it. It's in there, in a comment, Interplanetary Overthruster... I'd written back in March, 2007:

    "Hah. After watching this animation for more hours that I can remember - I finally came up with a name for Jupiter's gravitational resonance with Near Earth Objects - in homage two of my favorite movies. "Interplanetary Overthruster". I wonder if I can get my own wikipedia entry for that...."

Another piece on Epicycles I found pretty easily. What characteristic made the first piece disappear completely?

Sometimes it's the things you come up with, once, and first, that are the important ones. I've recently, and painfully, learned that.

Update: Less than four hours after I put this blog up - and the blog pinged the world - google's engine found this phrase. Applause to google!

I've coined a few other words and phrases; at least Googlethruster is still out there... but I don't get credit for coining git's usage of the word "porcelain" (which I did while otherwise losing an argument with Linus), either.

If I can't outsource my canon's search engine to google anymore, it's looking like I'm going to have to implement my own search engine again. Using ikiwiki, I can at least grep through it all, but something more complete would be nice to have.

While trying to look through my old blog manually, I find myself missing the concept of a "next/prev" button at the end of each article so I could move forward or backward in time, easily. I wonder if I can do that with ikiwiki? After looking through my hits on the old blog it looks like most of my readers (and commenters) are on facebook these days, so generating good RSS may well be enough. Haven't even LOOKED at the RSS generator yet....

While I'm at it, I'd really like there to be a Cite plugin that “did the right thing” with quotes. The font I'm using doesn't display them as curly... sigh...

Update: After hitting a lot of keys one day with my USA international keyboard I discovered I could get curly quotes. It's right-alt-shift-{ and } for those. The georgia font displays them the right way, too, so I switched to that from Arial. Georgia was a bit smaller than I liked, so I fiddled with that, too.

Wednesday, October 27, 2010 Tags:

Darn it, I'm innovating again.

I want the navigation in the blog and wiki to be special, yet still readable and usable for those with cookies turned off. I remain on a quest for vertical space.

This is a case for javascript. I hope.

With the proper cookie, show:

  • login or logout
  • if logged in, and an admin user, show "edit" on every page
  • if logged in, and not an admin user, show "edit" on appropriate pages
  • comment - on the individual blog pages (maybe)



  • Without javascript show all these next to each other, like this:

Login Logout Help Edit RecentChanges Blog Wiki Archives Tags

  • With Javascript show the appropriate login/logout and the rest in a drop down menu

So I get three cookies from ikiwiki:


openid_provider ??

Now what? I am going to defer hacking on this for another day, but getting cookies is solved. Manipulating css is solvable - setting the display: none for sections that I don't want to display is straightforward. I'd have to modify ikiwiki.cgi to generate a cookie that makes sense for general page permissions (and enforce better security on the backend), maybe a regexp of some kind that looks similar to how ikiwiki actually does things internally...

And I want the nav stuff to be deferred until late in the page.

Wednesday, October 27, 2010 Tags:

Yesterday Craig and I upgraded to business service and a fixed IP from Comcast. It basically costs an extra 30 dollars a month and promises service no better than the residential service. The reality, at least currently, is awesome:

Download speeds went from roughly 12Mbit to 24Mbit. Uploads improved somewhat, from about 3Mbit to over 4Mbit. And the nearest 6to4 routers went from 16ms away to less than 10ms. For the first time ever, I saw an IPv6 connection to the servers I maintain at connect FASTER over IPv6 than IPv4 - ping times dropped from roughly 78ms using ipv4 to 55ms using 6in4.

Craig's aging wireless router wasn't fast enough to keep up, so I broke out the 5.8 ghz wireless-n routers I've been using, built a new version of openwrt for them (which has support for IPv6 in AP/STA mode - 300Mbit capable, in other words) and got those working, more or less. With a bunch of hand configuration, I got 24Mbit to the internet over my local wireless connection from upstairs to downstairs...

I had to turn off the firewalling features of the comcast router in order to have a reliably "up" ipv6 connection, (otherwise I would lose connectivity from the outside world inside of a minute or two of non-use) and switch to the openrd box for firewalling.

In the process of converting from my dynamic IP + 6in4 tunneling to the static IP + 6in4 tunneling I broke a couple of things that I'm still in the process of tracking down. Notably, internal routing broke (something is natting, or babeld is acting up), as did split DNS. I basically lost connectivity from upstairs to downstairs to the Internet somehow. It was annoying as hell, but I didn't have time to fix it, so I also broke out Squid on my openrd box. After getting that working, I get about 22Mbit/sec through the proxy - not as fast as the direct connection, but enough to lower my annoyance at having a broken split DNS and default gateway to live with while I write...

I've added support for users pages, and started poking into what authentication and security issues exist in ikiwiki... commenting seems to work. Amusingly enough, because I hand coded my current proxy I had to switch my local name to mylaptop.local so I'd continue to be able to handle the wiki features via the web - which is already becoming addictive. And then I had to tell firefox to ignore .local urls for proxying.

I am hoping it "does the right thing" for user dirs, and commenting is robust. I'd really like to take my comments with me....


1) My babel redistribution of the default route was only permitted when it came from DHCP! I needed to also permit it when configured statically, which is protocol 4:


echo 1 > /proc/sys/net/ipv6/conf/all/forwarding

WANIP=$(ip -4 addr show dev eth0 | grep 'inet ' | awk '{print $2}' | cut -d/ -f1)
if [ -n "$WANIP" ]; then
    V6PREFIX=$(printf '2002:%02x%02x:%02x%02x' $(echo $WANIP | tr . ' '))
    ip tunnel add tun6to4 mode sit ttl 255 remote any local $WANIP
    ip link set tun6to4 mtu 1280
    ip link set tun6to4 up
    ip addr add $V6PREFIX:0::1/16 dev tun6to4
    ip addr add $V6PREFIX:FFFF::1/64 dev eth0
    ip addr add $V6PREFIX:1::1/64 dev eth1
    ip -6 route add ::/0 via :: dev tun6to4 proto 4 # proto 4 is STATIC
    kill -HUP $(cat /var/run/
    ping6 -c 2 2002:9514:3640:36:2e0:81ff:fe23:90d3 # Comcast was rejecting my first connects
fi

exit 0

And in the babeld.conf file

redistribute ip ::/0 le 0 metric 128
redistribute local ip le 0 proto 4 metric 128
redistribute local ip ::/0 le 0 proto 4 metric 128

2) I had a typo in my internal split dns. I'm still not sure if it's doing the right thing to the outside world... basically internal ipv6 ips have to come from one port and go to another, now. I'd also not permitted connections from the new private IP's on the wireless subnets in the bind acl, so the bind server refused connections.
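The ACL part of that fix boils down to listing the new subnets in named.conf. A minimal sketch, where every prefix below is a placeholder for the real wired, wireless, and 6to4 subnets:

```
// named.conf sketch - prefixes are placeholders, not my actual subnets
acl home {;              // wired LAN;          // new wireless subnets
    2002:c000:0201::/48;          // the 6to4 prefix

options {
    allow-query { home; };
    listen-on-v6 { any; };
```

Anything not matched by allow-query gets its queries refused, which is exactly the silent failure mode I hit.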

Wednesday, October 27, 2010 Tags:

After a long fight with css to do most of what I want for blogging, I've added support for the built-in wiki into the blog format as well. This combination of things will let me move items from "conversational" to serious as time goes by.

I am thinking that in the end the root page of the bliki will become rather highly dynamic (and have MUCH better css) - the thus-far-losing battle with my need for vertical space is driving me bats...

I still need default templates to do what I want, and navigation to work better, but I'm getting there.

I'm also currently in a losing battle with the web caching idea. The interactive web interface REALLY wants caching turned off somehow. Pragma: no-cache?
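For the record, Pragma: no-cache is the HTTP/1.0 mechanism; the HTTP/1.1 way is a Cache-Control header on the dynamic pages. A sketch of what I have in mind for apache's mod_headers - the .cgi match is an assumption about where the interactive interface lives:

```
# Apache sketch: mark anything served by the cgi as uncacheable.
# The "\.cgi" pattern is an assumption about the interactive URLs.
<LocationMatch "\.cgi">
    Header set Cache-Control "no-cache, must-revalidate"
    Header set Pragma "no-cache"
</LocationMatch>
```

The static pages keep their long expiry; only the interactive paths opt out.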

It's GREAT to be able to work in EITHER git and an editor or the web interface. Much faster turnaround times too.

Tuesday, October 26, 2010 Tags:

I was curious as to how the new, improved ikiwiki based blog would perform - 3 entries compressed down to roughly 5k.

From the internet to my openrd it was about 80ms RTT.

    d@toutatis:~$ time elinks -dump > /dev/null
    real 0m0.547s
    user 0m0.260s
    sys 0m0.048s

Half a second via the internet. Not particularly good. From laptop to the openrd, over 54Mbit wireless:

    d@cruithne:~$ time elinks -dump > /dev/null
    real 0m0.102s
    user 0m0.080s
    sys 0m0.010s

1/10 second. That's a RTT time I like. Let's see how fast the wiki is running on the openrd itself:

    root@gw:~# time elinks -dump http://localhost/~d/the-edge/ > /dev/null
    real 0m0.549s
    user 0m0.530s
    sys 0m0.030s

Obviously startup time for elinks dominates this test. How about on my dual core laptop?

    d@cruithne:~$ time elinks -dump http://localhost/~d/the-edge/ > /dev/null
    real 0m0.073s
    user 0m0.050s
    sys 0m0.020s

Much better! What percentage of this is elinks startup overhead is a good question. Let's see how mdns affects the same query on the laptop:

    d@cruithne:~$ time elinks -dump http://cruithne.local/~d/the-edge/ > /dev/null
    real 0m0.185s
    user 0m0.100s
    sys 0m0.000s

Wow, multicast DNS really slows things down to .2 seconds. I happen to have a local bind server on my network, let's see how that performs:

    d@cruithne:~$ time elinks -dump > /dev/null
    real 0m0.066s
    user 0m0.070s
    sys 0m0.000s

My local bind server wins big over multicast DNS!

But why does it take so long to transfer less than 6k over the internet?

Wireshark dump

3.550770 -> TCP 35972 > http [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=2973680883 TSER=0 WS=6
3.550857 -> TCP http > 35972 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1460 TSV=7145058 TSER=2973680883 WS=5
3.630475 -> TCP 35972 > http [ACK] Seq=1 Ack=1 Win=5888 Len=0 TSV=2973680903 TSER=7145058
3.630862 -> HTTP GET /~d/the-edge/ HTTP/1.1
3.630931 -> TCP http > 35972 [ACK] Seq=1 Ack=246 Win=6880 Len=0 TSV=7145066 TSER=2973680903
3.632001 -> TCP [TCP segment of a reassembled PDU]
3.632086 -> TCP [TCP segment of a reassembled PDU]
3.712391 -> TCP 35972 > http [ACK] Seq=246 Ack=1449 Win=8768 Len=0 TSV=2973680924 TSER=7145066
3.712466 -> HTTP HTTP/1.1 200 OK
3.712492 -> TCP 35972 > http [ACK] Seq=246 Ack=2897 Win=11648 Len=0 TSV=2973680924 TSER=7145066
3.717746 -> TCP 35972 > http [ACK] Seq=246 Ack=4345 Win=14528 Len=0 TSV=2973680925 TSER=7145066
3.806623 -> TCP 35972 > http [ACK] Seq=246 Ack=5392 Win=17472 Len=0 TSV=2973680947 TSER=7145074
3.814825 -> TCP 35972 > http [FIN, ACK] Seq=246 Ack=5392 Win=17472 Len=0 TSV=2973680949 TSER=7145074
3.814965 -> TCP http > 35972 [FIN, ACK] Seq=5392 Ack=247 Win=6880 Len=0 TSV=7145084 TSER=2973680949
3.892041 -> TCP 35972 > http [ACK] Seq=247 Ack=5393 Win=17472 Len=0 TSV=2973680969 TSER=7145084

How fast is my existing blog? Well, the main page is about 120k right now...

root@gw:~# time elinks -dump > /dev/null
real 0m2.024s
user 0m0.670s
sys 0m0.040s

The first access has a large chunk of time that I would guess is related to DNS lookup.

root@gw:~# time elinks -dump > /dev/null
real 0m1.397s
user 0m0.700s
sys 0m0.020s

root@gw:~# time elinks -dump > /dev/null
real 0m1.377s
user 0m0.670s
sys 0m0.040s

d@cruithne:~$ time elinks -dump > /dev/null
real 0m0.854s
user 0m0.120s
sys 0m0.000s

d@cruithne:~$ time elinks -dump > /dev/null
real 0m1.110s
user 0m0.070s
sys 0m0.040s

d@cruithne:~$ time elinks -dump > /dev/null
real 0m0.767s
user 0m0.090s
sys 0m0.030s

root@gw:~# traceroute
traceroute to (, 30 hops max, 60 byte packets
1 ( 10.465 ms 10.803 ms 16.763 ms
2 ( 10.381 ms 10.351 ms 10.297 ms
3 ( 11.624 ms 11.580 ms 11.524 ms
4 ( 11.274 ms 11.259 ms 11.206 ms
5 ( 11.887 ms 12.055 ms 12.016 ms
6 ( 31.318 ms 33.274 ms 27.925 ms
7 ( 28.065 ms 28.038 ms 27.986 ms
8 ( 64.349 ms 64.330 ms 68.889 ms
9 ( 70.749 ms 68.275 ms 71.075 ms
10 ( 74.269 ms 73.944 ms 75.255 ms
11 ( 82.692 ms 82.335 ms 82.403 ms
12 ( 81.917 ms ( 80.080 ms ( 80.638 ms
13 ( 81.608 ms ( 82.738 ms ( 82.206 ms
14 ( 90.480 ms * 90.573 ms
15 ( 82.892 ms 82.737 ms 82.718 ms

Tuesday, October 26, 2010 Tags: