From KSP's dev notes, I know they handle it with a quad sphere (that is, a cube whose faces have been projected outward into a sphere), and then play with the subdivision level: the closer a patch is to the camera, the more subdivisions, and so the more polygons. The result is that you get detailed terrain up close, but still save thousands of polygons in the distance.
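A minimal Python sketch of those two ideas: projecting a cube point onto the sphere by normalizing it, and picking a subdivision level from camera distance. The "halve the detail each time the distance doubles" rule here is my own illustrative assumption, not KSP's actual heuristic, and the function names are hypothetical.

```python
import math

def cube_to_sphere(x, y, z):
    # Project a point on the unit cube onto the unit sphere
    # by normalizing its position vector.
    r = math.sqrt(x * x + y * y + z * z)
    return (x / r, y / r, z / r)

def subdivision_level(distance, base_level=8, min_level=1):
    # Assumed LOD rule: drop one subdivision level every time
    # the camera distance doubles, never going below min_level.
    level = base_level - int(math.log2(max(distance, 1.0)))
    return max(level, min_level)

# Each face at level n is a grid of 4**n quads, so the savings
# from dropping a few levels in the distance are exponential.
print(subdivision_level(1.0))    # nearest patches: full detail
print(subdivision_level(256.0))  # distant patches: clamped to min_level
```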
But that works best with terrain. And while the beta has it, the "regular" stable version still uses a plain sphere. I believe it works the same way, though: the closer, the more polygons.
I think for simplicity there's one rectangle per surface tile on the planet mesh, and then the transformation works the same way.
Though for Orbiter, such heavy optimizations aren't needed. They're welcome, sure, but 99% of the time the only meshes on screen are the planet and your ship, which is at most ~20k polygons (Dansteph's Arrow Freighter is around that count). So even if the planet mesh is 100k polygons at most, the GPU doesn't suffer at all (especially with the D3D9 client, which is itself a huge optimization over D3D7), because mid-range GPUs can push 500k polygons at a 60 Hz refresh rate.