No, I'm talking about the era in which each platform was born, and which input mechanisms were a priority in the design of the OS. The initial Android project started back when the common smartphone/PDA interface was either stylus driven or some sort of trackball/cursor-based navigation.
Android followed those examples in its early designs. The OS was then adapted to a more finger-touch-friendly setup, and Google made it work well enough to keep many folks happy with their phones. Microsoft seems to be on a similar path in the tablet space, taking its much older Windows core built around keyboards and mice (and later styluses via the Tablet PC projects) and adapting it to finger touch for the modern tablet. It can work, but it's the difference between reworking part of an OS for a new interface versus being designed from day one for that interface.
Another parallel would be early Windows (non-NT) vs classic Mac OS (pre-OS X). Windows was just a GUI shell program bolted on top of the DOS OS, while Mac OS was a GUI OS from the ground up. During boot with Windows, you got to watch DOS boot first in a text-mode console, and if Windows failed to start, it dumped errors into that text console. Mac OS always booted into a GUI, and always presented GUI errors if something went wrong during boot. (This is meant as a high-level illustration, not a low-level discussion of boot firmware, OS kernels, GUI windowing systems, etc.)
This article from a former Google intern discusses this in much more detail:
https://plus.google.com/100838276097451809262/posts/VDkV9XaJRGS