Those newer cards are fast enough to make engine speed a moot point. And besides, if you are using the same texture (as a 64x64 versus a 512x512) to cover the same area, the 64x64 is going to be faster.
Wrong, wrong, wrongwrongwrongwrongwrongwrong
WRONG.
First off, this mindset of "oh computers are so fast we don't have to worry about optimizing our code anymore!" is going to go down in flames once people realize
how much power is being wasted on idiotic programming and wasteful resource use. This is best explained with an example from my own efforts to optimize my engine: a month ago, my engine could render 10,000 batch-rendered images on my desktop at about 40 FPS. A short while later, I realized that I could massively optimize my rendering pipeline by using a red-black tree (a self-balancing binary search tree) instead of a sorted list. After a few other optimizations, the end result was that it can now render those same 10,000 batch-rendered images at
80 FPS.
By increasing the speed and efficiency of my engine, I
doubled the speed of my rendering. Are you still telling me that engine speed is pointless?
Furthermore, smaller textures do not equate to faster drawing. Almost the only thing that matters is
how many pixels you are drawing to. Whether you stretch a 64x64 or a 256x256 across the entire screen, you have the same number of texture lookups going through the filtering process, which means your only benefit is memory: the lookups will be slightly faster for the 64x64 because there's less memory to worry about.
This also means that Flipside is somewhat mistaken: large texture sizes are
extremely dangerous. Case in point: if you put a 1024x1024 on my laptop, performance drops a significant amount. If you try a 2048x2048, it suffers catastrophic performance loss. Anything larger and it can't support it at all. However, render that 1024x1024 as four 512x512 chunks and you'll get a 20-30% performance increase. The same effect can be observed on my desktop, with its newer graphics card, at double that resolution: put in a 4096x4096 and it will start to suffocate, but do the same thing with 1024x1024 chunks and you'll get massive performance increases. (In this test the textures have to be scaled down so they are all still rendering; otherwise the performance increase is just due to culling. Note that the comparison here is between a 2048x2048 at 25% size and 1024x1024 chunks at 25% size.)
I'm not saying it does, but when someone starts saying engine X is more awesome than engine Y while engine X hasn't been used to the extent engine Y has, it irks me somewhat.
Windows is used everywhere. Do you think Windows is awesome? Linux is only used by 2% of the computer population. That must mean it sucks, right? I don't think I really need to keep going to point out the glaring flaw in your argument. Take Flash: Flash is a horrible monstrosity of bloated crapware that is so prevalent across the web only because its sole competitor is Microsoft's pathetic "Silverlight" initiative. If you don't believe me that Flash is crap: it takes the same amount of resources to run unoptimized Flash games as it does to run Crysis. If you still don't believe me: Flash once managed to crash my friend's computer and corrupt its own update file so that the system BSOD'd every time he logged in. He had to go into
safe mode and uninstall all Adobe-related software just to get the thing functional again. This is the same program that is used on 99% of the websites across the web. Are you still, honest to god, telling me that just because X is used more than Y, Y can't be better than X?
Irrlicht's engine core is the best core I've ever seen. Just because it's not focused on making everything pretty for the general public doesn't mean it's a piece of crap; it just means it has a different focus: speed. I'm sorry, but not everyone on the planet wants to buy a gaming rig that costs the same as a used car when modern graphics shouldn't be so goddamn bloated in the first place.