I've long been a user of the editor nvi. A few years back I went on a quest to find an authoritative vi with no bloody n or m, and determined that nvi is as close as there is. Everything else is extraordinarily obsolete and/or proprietary. I'm certainly not going to moronically consider the Sun version to be authoritative, and if you do, you're a moron.
Anyways, I happened to paste (using screen) "2148401152" into nvi and it didn't do anything so, being stupid, I then hit "i" and pasted the number again. When I hit escape, it locked up. I went to another screen and saw it eating CPU time like crazy, so I thought it had crashed. But after about 20 seconds of CPU time, it stopped and I saw my mistake: it had inserted 2 billion copies of the number into the edit buffer. I don't really know if this is possible (can nvi edit files bigger than 2GB?), but it came close enough anyways. So I hit "D" to remove the redundant copies and about 15 seconds later, it had completed that operation. But it still showed the redundant copies on the screen even though it would not let me edit them and they did not go into the file when I hit ":w" (I guess it cannot edit files bigger than 2GB).
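For what it's worth, that pasted number lands just past 2^31, so if nvi keeps the repeat count (or the buffer size) in a signed 32-bit integer, this is exactly the territory where things overflow. A quick sanity check, plain shell arithmetic, nothing nvi-specific:

```shell
# 2147483648 is 2^31; the pasted repeat count overshoots it slightly.
echo $(( 2148401152 - 2147483648 ))   # counts past 2^31: 917504
```

So "close enough" to the 2GB boundary is an understatement; the count itself is already over it.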
nvi is so old and simple and thoroughly tested that I'm surprised when it has a bug (even more surprised than when lynx crashes). So I told my friend about this, and he told me he switched to vim. I immediately bashed vim for having unacceptable undo behavior, and he told me vim has a vi-compatible undo mode, but in fact that mode is compatible only with the SunOS vi. He observed that nvi's undo behavior appears to be unique. This is particularly hilarious to me because one of the features nvi added over Sun's vi is undo history, so there was no meaningful undo behavior in Sun's vi for vim to be compatible with.
So I told him that I would never use vim because it is slower and bloated and I'll have to fight to get rid of syntax highlighting and the morons implemented undo wrong and the fact that they're morons additionally proves that their codebase has a much higher rate of bugs per line of code which, combined with the massive number of lines of code, means it has a thousand times as many bugs as nvi.
We both started nvi and vim on our respective computers and immediately observed that nvi uses approximately twice as much RAM as vim when not doing anything at all. So much for bloatware, right? Well, I wasn't surprised; I'm used to newfangled programs using this trick or that trick to show up low in the OS's memory counter. So I tried an experiment. I opened a new session in each and typed "1000000aaoeu^[", which appends 4MB of "aoeuaoeuaoeuaoeu..." to the buffer.
nvi finished that operation in about a second and let me edit within that buffer comfortably. It turns out that if I had pasted any number smaller than 2 fucking billion into the repeat field, I would have had no complaint with nvi whatsoever.
However, vim is still chugging. My 5-day-old Firefox session has wasted a tremendous 20 minutes of CPU time doing basically nothing, and it is the standard by which I measure bloat. No longer! vim has been trying to create this 4MB file for more than 30 minutes now!
Holy fucking cow!!
It looks like I'm going to finish my workday and suspend my laptop before I find out how many licks it takes to get to the center of the vimsie-pop.
Here's the worst case, I figure:
for i in `seq 1 1000000` ; do /bin/echo -n aoeu >> buh; done
That's bash, mind you; it has to call fork() a million times. I can't imagine vim could possibly do any worse. Assuming it scales linearly (which is probably true of bash but does not appear to be true of vim), bash will take 115 minutes.
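For the record, the fork storm isn't even necessary; a single pipeline builds the same 4MB file in linear time. This is just standard coreutils, not a claim about what any editor does internally:

```shell
# Print "aoeu" a million times with no newlines into buh: yes(1) emits a
# million "aoeu" lines, head caps the count, tr strips the newlines.
yes aoeu | head -n 1000000 | tr -d '\n' > buh
wc -c buh
```

The file should come out to exactly 4,000,000 bytes, in well under a second.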
vim is still going after 74 minutes.
My friend did some experiments and verified that vim's execution time grows as the square of the number of times the command is repeated. It's impressive how numerically clean this quadratic behavior is; they could use vim as a (counter)example in an algorithm analysis class. And impressive that they could fuck up so severely. Anyways, it takes 2+(n/4400)^2 seconds to do n "a" operations, so n=1000000 would take 14.3 hours. I'm not going to wait for it. That's well over bash fork()ing a million times. In-fucking-sane.
So, while it's quadratic vs. linear, and therefore unfair to compare vim and nvi in the general case, for the specific case I selected vim is roughly 50,000 times slower than nvi.
And before you go on about how unrealistic this test case is, let me enlighten you on the joy of a modal editor. The fact that every single command supports a repeat factor is not an accident or a useless novelty. Suppose you really did want to generate the 4MB file that I described? I've used the repeat factor many times to create test datasets. But more than that, vi is not limited to braindead operations. I perform all sorts of sed-like functions on large text files in a nvi session. But if I were to do that in vim, I'd apparently be waiting. And waiting. And waiting. Then giving up because in practice it's just not possible to wait 14 hours to process a 4MB file.
Even a 286 can do a little bit better than that.
And to top it off, it has a fucking 2 second splash screen.
Might as well use Eclipse.
Oh, and for the record, I turned off syntax highlighting and I upgraded to the newest vim to no avail. In fact, it made it somewhat slower (I had previously been using vim-tiny which is apparently the turbo version).