Archive | Uncategorized RSS feed for this section

Speaking at Software Industry Conference

I’m currently in Dallas at the Software Industry Conference, where I’ll be giving a presentation about SEO strategies on Saturday.  In the meantime, if you’re at the conference or feel like coming out to the Hyatt Regency, feel free to get in touch with me.

As you have probably guessed, I’ll be posting the presentation and some textual elaboration on it right after I finish delivering it.  (I don’t know if video will be available this time.)

Running Apache On A Memory-Constrained VPS

Yesterday about a hundred thousand people visited this blog due to my post on names, and the server it was on died several fiery deaths. This has been a persistent issue for me in dealing with Apache (the site dies nearly every time I get Reddited — with only about 10,000 visitors each time, which shouldn’t be a big number on the Internet), but no amount of enabling WordPress cache plugins, tweaking my Apache settings, upgrading the VPS’ RAM, or Googling led me to a solution.

However, necessity is the mother of invention, and I finally figured out what was up yesterday. The culprit: KeepAlive.

Setting up and tearing down HTTP connections is expensive for both servers and clients, so Apache keeps connections open for a configurable amount of time after it has finished a request.  This is an extraordinarily sensible default, since the vast majority of HTTP requests will be followed by another HTTP request — fetch dynamically generated HTML, then start fetching linked static assets like stylesheets and images, etc.  On a typical page load here, 42 of the 43 requests are followed by another request within a 3 second interval.  It is a huge throughput win.  However, if you’re running a memory constrained VPS and get hit by a huge wave of traffic, KeepAlive will kill you.

When I started getting hit by the wave yesterday, I had 512MB of RAM and a cap (ServerLimit = MaxClients) of 20 worker processes to deal with them.  Each worker was capable of processing a request in a fifth of a second, because everything was cached.  This implies that my throughput should have been close to 20 workers * 5 requests a second * 60 seconds = 6,000 satisfied clients a minute, enough to withstand even a mighty slashdotting.  (That is a bit of an overestimation, since there were also static assets being requested with each hit, but to fix an earlier Reddit attack I had manually hacked the heck out of my WordPress theme to load static assets from Bingo Card Creator’s Nginx, because there seems to be no power on Earth or under it that can take Nginx down.)

However, I had KeepAlive on, set to three seconds.  This meant that for every fifth of a second a worker spent streaming cached content to a client, it spent 3 seconds sucking its thumb waiting for that client to come back and ask for something else.  In the meantime, other clients were stacking up like planes over O’Hare.  The first twenty clients get in and, from the perspective of every other client, the site totally dies for three seconds.  Then the next twenty clients get served, and the site continues to be dead for everybody else.  Cycle, rinse, repeat.  The worst part was people were joining the queue faster than their clients were either getting handled or timed out, so it was essentially a denial of service attack caused by the default settings.  The throughput of the server went from about 6,000 requests per minute to about 380 requests per minute.  380 is, well, not quite enough.
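The arithmetic here can be sketched in a few lines of Ruby (a back-of-the-envelope model using 20 workers, a fifth of a second to serve a cached page, and a 3 second KeepAlive window — not a benchmark):

```ruby
# Back-of-the-envelope throughput for a 20-worker prefork Apache.
workers        = 20
service_time   = 0.2   # seconds to stream a cached page (a fifth of a second)
keepalive_wait = 3.0   # seconds the worker then idles waiting on the client

# Without KeepAlive, a worker turns over a client every 0.2 seconds.
without_keepalive = (workers * 60 / service_time).round   # clients per minute

# With KeepAlive, each client occupies a worker for 3.2 seconds.
with_keepalive = (workers * 60 / (service_time + keepalive_wait)).round
```

The model says 6,000 clients a minute without KeepAlive versus roughly 375 a minute with it, which matches what the server actually did.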

Thus the solution: turning KeepAlive off.  This caused CPU usage to spike quite a bit, but since the caching plugin was working, it immediately alleviated all of the user-visible problems.  Bingo, done.
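For reference, the change is a one-line edit in your Apache configuration.  The directive names below are Apache’s own; the commented-out values are illustrative, for the case where you want to keep persistent connections but shrink the window:

```apache
# The fix: turn persistent connections off entirely.
KeepAlive Off

# Alternatively, keep them but shorten the window drastically:
# KeepAlive On
# KeepAliveTimeout 1
# MaxKeepAliveRequests 100
```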

Since I tried about a dozen things prior to hitting on this, I thought I’d quickly write them down in case you are an unlucky sod Googling for Apache settings for your VPS, possibly Ubuntu Apache settings, or that sort of thing:

  • Increase VPS RAM: Not really worth doing unless you’re on 256MB.  Apache should be able to handle the load with 20 processes.
  • Am I using pre-fork Apache or the worker MPM? If you’re on Ubuntu, you’re probably using the pre-fork Apache, and worker MPM settings will be totally ignored.  You can check this by running apache2 -l.  (This is chosen at compile time and can’t be altered via the config files, so if — like me — you just apt-get your way around getting common programs installed, you’re likely stuck.)
  • What should my pre-fork settings be then?

Assuming 512 MB of RAM and you are only running Apache and MySQL on the box:

<IfModule mpm_prefork_module>
    StartServers           2
    MinSpareServers        2
    MaxSpareServers        5
    ServerLimit           20
    MaxClients            20
    MaxRequestsPerChild 10000
</IfModule>

You can bump ServerLimit and MaxClients to 48 or so if you have 1GB of RAM.  Note that this assumes you’re using a fairly typical WordPress installation and you’ve tried to optimize Apache’s memory usage.  If you see your VPS swapping, move those numbers down (and restart Apache) until you see it stop swapping.  Apache being inaccessible is bad, but swapping might slow your server down badly enough to kill even your SSH connection, and then you’ll have to reboot and pray you can get in fast enough to tweak settings before it happens again.
  • How do I tweak Apache’s memory usage? Turn off modules you don’t need.  Go to /etc/apache2/mods-enabled.  Take note of how many things there are that you’re not using.  Run sudo a2dismod (name of module) for them, then restart Apache.  This literally halved my per-process memory consumption last night, which let me run twice as many processes.  (That still won’t help you if KeepAlive is on, but it could majorly increase responsiveness if you’ve eliminated that bottleneck.)  Good choices for disabling are, probably, everything that starts with dav, everything that starts with auth (unless you’re securing wp-admin at the server layer — in that case, enable only the module you need for that), and userdir.
  • What cache to use? WordPress Super Cache.  Installs quickly (follow the directions to the letter, especially regarding permissions), works great.  Don’t try to survive a Slashdotting without it.
  • Any other tips?  Serve static files through Nginx.  Find a Rails developer to explain it to you if you haven’t done it before — it is easier than you’d think and will take major load off your server (Apache only serves like 3 requests of the 43 required to load a typical page on my site — and two of those are due to a plugin that I can’t be bothered to patch).
  • My server is slammed and I can’t get into the WordPress admin to enable the caching plugin I just installed:  Make sure Apache’s KeepAlive is off.   Change your permit directive in the Apache configuration to

<Directory /var/www/blog-directory-getting-slammed-goes-here>
    Options FollowSymLinks
    AllowOverride All
    Order deny,allow
    Deny from all
    Allow from <your IP address goes here>
</Directory>

This will have Apache just deny requests from clients other than yourself.  (If you’re using KeepAlive, Apache will even keep the connection open, which won’t do them a lick of good, since it only holds the line open so that it can deny their next request promptly.  Don’t use KeepAlive.)  That should let you get into the WordPress admin to enable and test caching.  After doing so, you can switch to Allow from All and then test to see if your site is now surviving.

Sidenote: If you can possibly help it, I recommend Nginx over Apache.  I use Apache because a couple of years ago it was not simple to use Nginx with PHP.  This is no longer the case.  The default settings (or whatever you’ve copied from the My First Rails Nginx Configuration you just Googled) are much more forgiving than Apache’s defaults.  It is extraordinarily difficult to kill Nginx unless you set out to do so.  Apache.conf, on the other hand, is a whole mess of black magic with subtle interactions that will kill you under plausible deployment scenarios, and the official documentation has copious explanations of What the settings do and almost nothing regarding Why or How you should configure them.

Hopefully, this will save you, brave Googling blog owner from the future, from having to figure this out by trial and error while your server is down.  Godspeed.

Falsehoods Programmers Believe About Names

[This post has been translated into Japanese by one of our readers: 和訳もあります。]

John Graham-Cumming wrote an article today complaining about how a computer system he was working with described his last name as having invalid characters.  It of course does not, because anything someone tells you is their name is — by definition — an appropriate identifier for them.  John was understandably vexed about this situation, and he has every right to be, because names are central to our identities, virtually by definition.

I have lived in Japan for several years, programming in a professional capacity, and I have broken many systems by the simple expedient of being introduced into them.  (Most people call me Patrick McKenzie, but I’ll acknowledge as correct any of six different “full” names, and many systems I deal with will accept precisely none of them.) Similarly, I’ve worked with Big Freaking Enterprises which, by dint of doing business globally, have theoretically designed their systems to allow all names to work in them.  I have never seen a computer system which handles names properly and doubt one exists, anywhere.

So, as a public service, I’m going to list assumptions your systems probably make about names.  All of these assumptions are wrong.  Try to make fewer of them next time you write a system which touches names.

  1. People have exactly one canonical full name.
  2. People have exactly one full name which they go by.
  3. People have, at this point in time, exactly one canonical full name.
  4. People have, at this point in time, one full name which they go by.
  5. People have exactly N names, for any value of N.
  6. People’s names fit within a certain defined amount of space.
  7. People’s names do not change.
  8. People’s names change, but only at a certain enumerated set of events.
  9. People’s names are written in ASCII.
  10. People’s names are written in any single character set.
  11. People’s names are all mapped in Unicode code points.
  12. People’s names are case sensitive.
  13. People’s names are case insensitive.
  14. People’s names sometimes have prefixes or suffixes, but you can safely ignore those.
  15. People’s names do not contain numbers.
  16. People’s names are not written in ALL CAPS.
  17. People’s names are not written in all lower case letters.
  18. People’s names have an order to them.  Picking any ordering scheme will automatically result in consistent ordering among all systems, as long as both use the same ordering scheme for the same name.
  19. People’s first names and last names are, by necessity, different.
  20. People have last names, family names, or anything else which is shared by folks recognized as their relatives.
  21. People’s names are globally unique.
  22. People’s names are almost globally unique.
  23. Alright alright but surely people’s names are diverse enough such that no million people share the same name.
  24. My system will never have to deal with names from China.
  25. Or Japan.
  26. Or Korea.
  27. Or Ireland, the United Kingdom, the United States, Spain, Mexico, Brazil, Peru, Russia, Sweden, Botswana, South Africa, Trinidad, Haiti, France, or the Klingon Empire, all of which have “weird” naming schemes in common use.
  28. That Klingon Empire thing was a joke, right?
  29. Confound your cultural relativism!  People in my society, at least, agree on one commonly accepted standard for names.
  30. There exists an algorithm which transforms names and can be reversed losslessly.  (Yes, yes, you can do it if your algorithm returns the input.  You get a gold star.)
  31. I can safely assume that this dictionary of bad words contains no people’s names in it.
  32. People’s names are assigned at birth.
  33. OK, maybe not at birth, but at least pretty close to birth.
  34. Alright, alright, within a year or so of birth.
  35. Five years?
  36. You’re kidding me, right?
  37. Two different systems containing data about the same person will use the same name for that person.
  38. Two different data entry operators, given a person’s name, will by necessity enter bitwise equivalent strings on any single system, if the system is well-designed.
  39. People whose names break my system are weird outliers.  They should have had solid, acceptable names, like 田中太郎.
  40. People have names.

This list is by no means exhaustive.  If you need examples of real names which disprove any of the above commonly held misconceptions, I will happily introduce you to several.  Feel free to add other misconceptions in the comments, and refer people to this post the next time they suggest a genius idea like a database table with a first_name and last_name column.
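If you want a starting point that avoids most of the list above, the least-wrong baseline I know of is a single free-form field stored exactly as entered.  Here is a sketch (the class and attribute names are my own invention, not from any real system):

```ruby
# The "fewest assumptions" approach: one free-form Unicode field with no
# length cap and no assumed structure, plus an optional display name.
# (Illustrative sketch only.)
class Person
  attr_reader :full_name, :display_name

  def initialize(full_name:, display_name: nil)
    @full_name    = full_name             # stored exactly as entered
    @display_name = display_name || full_name
  end
end

tanaka = Person.new(full_name: "田中太郎")
```

Even this makes assumptions (notably falsehood #40), but it breaks on far fewer real people than first_name/last_name columns do.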

Detecting Bots with Javascript for Better A/B Test Results

I am a big believer in not spending time creating features until you know customers actually need them.  The same goes for OSS projects: there is no point in overly complicating things until “customers” tell you they need to be a little more complicated.  (Helpfully, here some customers are actually capable of helping themselves… well, OK, it is theoretically possible at any rate.)

Some months ago, one of my “customers” for A/Bingo (my OSS Rails A/B testing library) told me that it needed to exclude bots from the counts.  At the time, all of my A/B tests were behind signup screens, so essentially no bots were executing them.  I considered the matter, and thought “Well, since bots aren’t intelligent enough to skew A/B test results, they’ll be distributed evenly over all the items being tested, and since A/B tests measure for difference in conversion rates rather than measuring absolute conversion rates, that should come out in the wash.”  I told him that.  He was less than happy about that answer, so I gave him my stock answer for folks who disagree with me on OSS design directions: it is MIT licensed, so you can fork it and code the feature yourself.  If you are too busy to code it, that is fine, I am available for consulting.

This issue has come up a few times, but nobody was sufficiently motivated about it to pay my consulting fee (I love when the market gives me exactly what I want), so I put it out of my mind.  However, I’ve recently been doing a spate of run-of-site A/B tests with the conversion being a purchase, and here the bots really are killers.

For example, let’s say that in the status quo I get about 2k visits a day and 5 sales, which are not atypical numbers for summer.  To discriminate between that and a conversion rate 25% higher, I’d need about 56k visits, or a month of data, to hit the 95% confidence interval.  Great.  The only problem is that A/Bingo doesn’t record 2k visits a day.  It records closer to 8k visits a day, because my site gets slammed by bots quite frequently.  This decreases my measured conversion rate from .25% to .0625%.  (If these numbers sound low, keep in mind that we’re in the offseason for my market, and that my site ranks for all manner of longtail search terms due to the amount of content I put out.  Many of my visitors are not really prospects.)

Does This Matter?

I still think that, theoretically speaking, since bots aren’t intelligent enough to convert at different rates over the alternatives, the A/B testing confidence math works out pretty much identically.  Here’s the formula for the Z statistic which I use for testing:

z = (cr_a − cr_b) / sqrt( cr_a (1 − cr_a) / n_a + cr_b (1 − cr_b) / n_b )

The cr stands for Conversion Rate and n stands for sample size, for the two alternatives used.  If we increase the sample sizes by some constant factor X (the conversions stay fixed, so each cr becomes cr / X and each n becomes X n), we would expect the equation to turn into:

z = ( (cr_a − cr_b) / X ) / sqrt( (cr_a / X)(1 − cr_a / X) / (X n_a) + (cr_b / X)(1 − cr_b / X) / (X n_b) )

We can factor out 1/X from the numerator and bring it to the denominator (by inverting it).  Yay, grade school.

Now, by the magic of high school algebra (if I screw this up the math team is *so* disowning me):

z = (cr_a − cr_b) / sqrt( cr_a (1 − cr_a / X) / n_a + cr_b (1 − cr_b / X) / n_b )

Now, if you look carefully at that, it is not the same equation as we started with.  How did it change?  Well, the complement of the conversion rate, (1 − cr), became (1 − cr / X), which is closer to 1 than it was previously.  (You can verify this by taking the limit as X approaches infinity.)  Getting closer to 1 means the terms under the square root get bigger, which means the denominator as a whole gets modestly bigger, which means the Z score gets modestly smaller, which could possibly hurt the calculation we’re making.

So, assuming I worked my algebra right here, the intuitive answer that I have been giving people for months is wrong: bots do bork statistical significance testing, by artificially depressing z scores and thus turning statistically significant results into null results at the margin.
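To make the effect concrete, here is the two-proportion z statistic in Ruby with made-up counts (the numbers are purely illustrative, and this is a sketch rather than A/Bingo’s actual implementation):

```ruby
# Two-proportion z statistic for an A/B test.
def z_statistic(conversions_a, n_a, conversions_b, n_b)
  cr_a = conversions_a.to_f / n_a
  cr_b = conversions_b.to_f / n_b
  (cr_a - cr_b) /
    Math.sqrt(cr_a * (1 - cr_a) / n_a + cr_b * (1 - cr_b) / n_b)
end

# Honest traffic: two alternatives, 10,000 human participants each.
z_human = z_statistic(60, 10_000, 45, 10_000)

# Same conversions, but bots quadruple the recorded participant counts.
z_with_bots = z_statistic(60, 40_000, 45, 40_000)
```

With these numbers the bots only shave the z score slightly (from roughly 1.468 down to roughly 1.465), but the shave is always downward, which is exactly the wrong direction when you’re hovering near a significance threshold.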

So what can we do about it?

The Naive Approach

You might think you can catch most bots with a simple User-Agent check.  I thought that, too.  As it turns out, that is catastrophically wrong, at least for the bot population that I deal with.  (Note that since keyword searches would suggest that my site is in the gambling industry, I get a lot of unwanted attention from scrapers.)  It barely got rid of half of the bots.

The More Robust Approach

One way we could try restricting bots is with a CAPTCHA, but it is a very bad idea to force all users to prove that they are human just so that you can A/B test them.  We need something that is totally automated and difficult for bots to do.

Happily, there is an answer for that: arbitrary Javascript execution.  While Googlebot (+) and a (very) few other cutting edge bots can execute Javascript, doing it at web scale is very resource intensive, and it also requires substantially more skill from the bot-maker than scripting wget or your HTTP library of choice.

+ What, you didn’t know that Googlebot could execute Javascript?  You need to make more friends with technically inclined SEOs.  They do some full evaluation (i.e. executing all of the Javascript on a page, just like a human would) and some evaluation by heuristics (i.e. grepping through the code and making guesses without actually executing it).  You can verify full evaluation by taking the method discussed in this blog post and tweaking it a little bit to use GETs rather than POSTs, then waiting for Googlebot to show up in your access logs for the forbidden URL.  (Seeing the heuristic approach is easier — put a URL in syntactically live but logically dead Javascript code, and watch it get crawled.)

To maximize the number of bots we catch (hopefully leaving Googlebot, which almost always correctly reports its user agent, as the only bot able to pass), we’re going to require the agent to perform three tasks:

  1. Add two random numbers together.  (Easy if you have JS.)
  2. Execute an AJAX request via Prototype or JQuery.  (Loading those libraries is, hah, “fairly challenging” to do without actually evaluating them.)
  3. Execute a POST.  (Googlebot should not POST.  It will do all sorts of things for GETs, though, including guessing query parameters that will likely let it crawl more of your site.  A topic for another day.)

This is fairly little code.  Here is the Prototype example:


  var a=Math.floor(Math.random()*11);
  var b=Math.floor(Math.random()*11);
  var x=new Ajax.Request('/some-url', {parameters:{a: a, b: b, c: a+b}});

and in JQuery:


  var a=Math.floor(Math.random()*11);
  var b=Math.floor(Math.random()*11);
  var x=jQuery.post('/some-url', {a: a, b: b, c: a+b});

Now, server side, we take the parameters a, b, and c, and we see if they form a valid triplet.  If so, we conclude they are human.  If not, we continue to assume that they’re probably a bot.
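The server-side check is only a few lines.  Here is a sketch in Ruby (not A/Bingo’s actual code; the regex guard and method name are my own, and it simply rejects missing or non-numeric parameters outright):

```ruby
# Validate the (a, b, c) triplet posted by the Javascript above.
def human_triplet?(params)
  # Reject missing or non-numeric parameters before doing arithmetic.
  return false unless %w[a b c].all? { |k| params[k].to_s =~ /\A\d+\z/ }
  a, b, c = params.values_at("a", "b", "c").map(&:to_i)
  a + b == c
end
```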

Note that I could have been a bit harsher on the maybe-bot and given them a problem which trusts them less: for example, calculate the MD5 of a value that I randomly picked and stuffed in the session, so that I could reject bots which hypothetically tried to replay previous answers, or bots hand-coded to “knock” on a=0, b=0, c=0 prior to accessing the rest of my site.  However, I’m really not that picky: this isn’t to keep a dedicated adversary out, it is to distinguish the overwhelming majority of bots from humans. (Besides, nobody gains from screwing up my A/B tests, so I don’t expect there to be dedicated adversaries. This isn’t a security feature.)

You might have noticed that I assume humans can run Javascript.  (My site breaks early and often without it.)  While I did not specifically design things so that Richard Stallman and folks running NoScript can’t influence my future development directions, I am not overwrought with grief at that coincidence.

Tying It Together

So now we can detect who can and who cannot execute Javascript, but there is one more little detail: we learn about your ability to execute Javascript potentially after you’ve started an A/B test.  For example, it is quite possible (likely, in fact) that the first page you request has an A/B test in it somewhere, and that the AJAX call from that page which registers your humanness will arrive after we have already counted (or not counted) your participation in the A/B test.

This has a really simple fix.  A/Bingo already tracks which tests you’ve previously participated in, to avoid double-counting.  In “discriminate against bots” mode, it tracks your participation (and conversions) but does not add them to the totals immediately unless you’ve previously proven yourself to be a human.  When you’re first marked as a human, it takes a look at the tests you’ve previously participated in (prior to turning human), and scores your participation for them after the fact.  Your subsequent tests will be scored immediately, because you’re now known to be human.
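That bookkeeping fits in a few lines.  Here is a sketch of the idea (an illustrative class of my own, not the gem’s real internals, which you can read in the A/Bingo source):

```ruby
# Track test participation, deferring scoring until the visitor has
# proven humanity; then score the deferred participation retroactively.
class Visitor
  attr_reader :scored, :pending

  def initialize
    @human   = false
    @pending = []   # tests joined before passing the Javascript check
    @scored  = []   # tests actually counted in the totals
  end

  def participate!(test_name)
    @human ? @scored << test_name : @pending << test_name
  end

  def mark_human!
    @human = true
    @scored.concat(@pending)   # score earlier participation after the fact
    @pending.clear
  end
end
```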

Folks who are interested in seeing the specifics of the ballet between the Javascript and server-side implementation can, of course, peruse the code at their leisure by git-ing it from the official site.  If you couldn’t care less about implementation details but want your A/B tests to be bot-proof ASAP, see the last entry in the FAQ for how to turn this on.

Other Applications

You could potentially use this in a variety of contexts:

1) With a little work, it is a no-interaction-required CAPTCHA for blog commenting and similar applications. Let all users, known-human and otherwise, immediately see their comments posted, but delay public posting of the comments until you have received the proof of Javascript execution from that user. (You’ll want to use slightly trickier Javascript, probably requiring state on your server as well.) Note that this will mean your site will be forever without the light of Richard Stallman’s comments.

2) Do user discrimination passively all the time. When your server hits high load, turn off “expensive” features for users who are not yet known to be human. This will stop performance issues caused by rogue bots gone wild, and also give you quite a bit of leeway at peak load, since bots are the majority of user agents. (I suppose you could block bots entirely during high load.)

3) Block bots from destructive actions, though you should be doing that anyway (by putting destructive actions behind a POST and authentication if there is any negative consequence to the destruction).

Interviewed by Andrew Warner On Entrepreneurship [Video]

The interview I mentioned earlier got rescheduled due to technical difficulties, but it is now up on Mixergy’s site.  You can see it here.

Topics include:

  • Why would teachers want to play bingo anyhow?
  • How did you pull this off while full-time employed?
  • What is it like being a Japanese salaryman?
  • What is the next product?  (Spoiler: Not telling you yet, come back in May.)
  • How did you get traction early at the start?
  • How do you make your processes more reliable to maximize the effectiveness of your time?

I’m pretty happy with how it came out, although given that it was about 2 in the morning when I recorded it due to time zone differences, sometimes my ability to speak in coherent sentences leaves a bit to be desired.  If you have any questions, feel free to comment here or there.

Peldi from Balsamiq Interviewed For An Hour

Peldi from Balsamiq, who is hugely inspiring to me and the rest of the uISV community, was interviewed for over an hour earlier this week on Mixergy.  Go watch it.  Everything he says about customer service, building remarkable products, early marketing (his post on the subject contains some of the best advice I’ve ever read), and competition just knocks it out of the park.

For folks here who have been reading me for a while but do not know about Mixergy yet: Andrew Warner does interviews with successful Internet business folks.  Most of them are inspiring, and many have killer, actionable tips that you can use in your businesses.  (I particularly like the one with the Wufoo guys, Peldi’s, and this one by Hiten Shah of Kissmetrics and, earlier, CrazyEgg, which I’ve mentioned a time or three here.)

Andrew interviewed me earlier, too.  The interview and transcript will be up one of these days, after the editors have made me sound intelligible.  (It is amazing what you can do with computers!)

Data Driven Software Design Presentation (plus bonus interview)

Last week I went down to Osaka to give a presentation to the Design Matters group at the Apple Store.  I originally prepared a very geeky software-centric dive into the magic of using statistics to improve your software, but I was informed that the audience wouldn’t be as geeky as I had expected, so with great help from Andreas and company I retooled the presentation into something less technical and more interesting on the same topic.  I don’t believe it was videotaped, but you can see my presentation and notes on Data-driven Software Design below:

Data-Driven Software Design

(Incidentally, that Slideshare widget is great SEO, now isn’t it.  I’m leaving their links attached out of sheer amusement.)

After the presentation, I met with some folks from MessaLiberty, one of the most impressive companies I’ve seen in Japan.  They do lots of WordPress/website consulting and are coming out with a recommendation engine product one of these days — all with a team of about seven young engineers working sane hours.  Ah, there is hope for the future yet.

Anyhow, they asked if they could interview me for their video blog.  You can see the interview in English and, in the near future (after they get done editing it), in Japanese.  Topics include a brief overview of the above presentation, when you should start A/B testing versus when to redirect your efforts elsewhere, and my advice for getting a job in Japan (spoiler: learn Japanese).

Quick Start For Rails on Windows Seven

Today I killed a few hours getting my Rails environment working on my brand new shiny 64 bit Windows Seven laptop.  These instructions should also work with Windows Vista.  I’m assuming you’re a fairly experienced Rails developer and just ended up in dependency purgatory like I did for the last few hours.

1.  Grab the MySQL developer version for your architecture (32 bit or 64 bit as appropriate) here.

2.  Grab Ruby here.  I used the 1.8.6 RC2 installer for my 64 bit architecture.

3.  Add C:\Ruby\bin to your path.  You can do this on Windows by opening the Start Menu, right clicking My Computer, clicking Properties, clicking Advanced / System Settings, and then adding it to the end of the PATH variable on the lower of the two dialogs.  Apologies for inexact setting names, my computer is Japanese so I’m working from memory.

4.  Verify that your path includes C:\Ruby\bin by opening a new command line and executing “path”.

5.  Good to go?  OK, execute:

gem install --no-rdoc --no-ri rails
gem install mysql

You’ll get all manner of errors on that MySQL installation. That is OK.

6. Here’s the magic: copy libmySQL.dll from here to C:\Ruby\bin.  If you do not do this, you will get ugly errors on Rails startup about not being able to load mysql_api.so.

You should now be able to successfully work with Rails as you have been previously, even from your Windows machine, and you will amaze your Mac-wielding friends.

Getting Interviewed By Andrew Warner at Mixergy

Andrew Warner of Mixergy will be interviewing me at 11 AM Pacific tomorrow, which is something like 14 hours from the timestamp on this post.  If it is between 11 AM and noon Pacific, you can catch the live interview and participate in a chatroom.  I’m told the main theme for the interview will be a business biography, so my regular readers are likely going to hear a lot of things you already know (“It makes bingo cards!  Wow, fancy that.”), but Andrew has a way of wheedling secrets out of people so I’m sure you’ll still enjoy it.

If you have any subject you’d particularly like to hear about, please post it in the comments and I’ll tell Andrew to ask about it.

Interviewed by Gabriel Weinberg [video]

Gabriel Weinberg, the entrepreneur behind the search engine Duck Duck Go, interviewed me earlier today for his upcoming book on getting traction.  The video, which runs about an hour in length, is available here.

I always look for a summary of contents prior to committing myself to a video (since they’re so much longer than reading a post), so here you go:

  • An outline of Bingo Card Creator’s SEO strategy including…
  • using mini-sites
  • using widgets
  • using scalable content generation
  • My thoughts on conversion optimization
  • A/B testing
  • The multiplicative effect of improvements in your funnels.
  • A wee bit of “How do I do it all?” while previously being employed (outsource, automate, eliminate).
  • “How’d you end up in Japan, anyhow?”

In fact, I was so convinced that I’d rather read videos than watch them 99% of the time that I took the liberty of transcribing it, with Gabe’s permission.

Some links to things mentioned in the interview:

  • The Conversion Optimizer case study Google wrote about me.
  • A/Bingo, my OSS Rails A/B testing library.
  • Hacker News and the Business of Software boards, which are home to the smartest minds about online software businesses anywhere, and keep me sane.  (I went down to City Hall today and filled out a bunch of paperwork, and the clerk’s response on hearing I was a software developer was “Web applications?  Wow, you’re an iPhone developer?!”  *sigh*  It is nice to have people who speak your language, and I don’t mean English.)
  • SEOMoz and SEOBook.  (P.S. I’m not sure if I adequately communicated this when speaking: both are great and I recommend them.)

In somewhat related news, I have an interview scheduled with Andrew Warner of Mixergy.com for April 30th at 11:00 AM Pacific time.  Andrew tends to do his interviews live, so if you have any questions you want to ask, be sure to tune in to the live chat.  Andrew has told me that he hopes to focus on my business biography, so I assume there will be less technical/marketing/SEO content and more storytelling — it should be fun.

Speaking of which, it looks like he has Peldi from Balsamiq booked for April 28th.  I highly recommend all the uISVs in the audience watch that one — Peldi is near the top of our profession in every way, and quite generous with his insights.

I’m absolutely floored that I’m appearing on guest lists next to folks like the 37Signals crew or Eric Ries, who rank among some of the largest influences in how I run my business.  Crikey.  It is an honor.