You don't seem to understand the connection between the two. Smaller components, placed closer together, mean shorter signal paths and thus reduced latencies, which allow the clock rate to be increased. A higher clock rate translates into more instructions per second, which is the usual measure of computing power.
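That link from clock rate to throughput can be put as a back-of-envelope model (a sketch only; the 3 GHz clock and 1.5 IPC figures below are illustrative assumptions, not measurements of any real chip):

```python
# Idealized throughput model: instructions per second is roughly the
# clock rate times the average instructions retired per cycle (IPC).
# This ignores stalls entirely -- that omission is the whole point of
# the RAM-bottleneck argument later in the thread.

def instructions_per_second(clock_hz: float, ipc: float) -> float:
    """Idealized throughput: clock rate x instructions per cycle."""
    return clock_hz * ipc

# A hypothetical 3 GHz core averaging 1.5 IPC:
print(f"{instructions_per_second(3e9, 1.5):.2e} instructions/s")  # 4.50e+09
```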
Heat dissipation and looming quantum effects are a problem no matter how the components are arranged on a chip. They afflict faster cores and more cores alike. That is a separate problem in itself and is not the issue here.
My comment was specifically about why hardware manufacturers are focusing on adding cores rather than increasing clock rates. The modern processor is plenty fast for real-life workloads; the bottleneck is RAM speed. If RAM speeds were to increase significantly, the balance might shift again. Until then, the path to superior performance lies in exploiting parallelism rather than in making the uniprocessor faster.
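The RAM-bottleneck claim is easy to see with some rough arithmetic: a main-memory access costs a fixed wall-clock latency, so the faster the clock, the more cycles each access burns doing nothing. A sketch (the ~100 ns DRAM latency is an assumed round ballpark figure):

```python
# How many core cycles a single main-memory access wastes, as a
# function of clock rate. Note that doubling the clock doubles the
# cycles lost per access -- raw clock gains evaporate on memory-bound
# work, which is the point being made above.

def cycles_lost_per_access(clock_hz: float, mem_latency_s: float) -> float:
    """Core cycles stalled during one memory access of fixed latency."""
    return clock_hz * mem_latency_s

DRAM_LATENCY = 100e-9  # assumed ~100 ns ballpark, not a measured value

for ghz in (1, 2, 4):
    stalled = cycles_lost_per_access(ghz * 1e9, DRAM_LATENCY)
    print(f"{ghz} GHz core: {stalled:.0f} cycles stalled per memory access")
```

Running it shows 100, 200, and 400 stalled cycles respectively: the faster core pays proportionally more for the same miss.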
Uniprocessors were left in the dust decades ago in the supercomputer race; all modern supercomputers are massively parallel. Granted, that parallelism is off-chip, but many of the same arguments for maximizing performance apply on-chip as well.
Duh!
It is precisely this refinement of "microarchitecture" that favors using on-chip real estate to add cores rather than to speed up a single processor. Even with evolving and improving cache-coherence schemes, pipelining, and branch-prediction strategies, a cache miss is a disaster for a modern processor. And cache misses WILL happen. That is a fact of life, and it is where the RAM-speed bottleneck kicks in. Unless RAM speeds catch up to processor speeds anytime soon, which is extremely unlikely, the only alternative is to have other cores standing by, doing useful work.
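Just how much of a disaster a miss is can be shown with the standard effective-CPI model from computer-architecture textbooks (the 2% miss rate and 300-cycle penalty below are assumed illustrative numbers):

```python
# Effective cycles-per-instruction (CPI) with cache misses folded in:
#   effective CPI = base CPI + miss rate * miss penalty
# Even a tiny miss rate, multiplied by a penalty of hundreds of cycles,
# swamps the base CPI of an otherwise fast core.

def effective_cpi(base_cpi: float, miss_rate: float, miss_penalty: float) -> float:
    """Average cycles per instruction once miss stalls are included."""
    return base_cpi + miss_rate * miss_penalty

# A core that would retire 1 instruction/cycle, with a 2% miss rate
# and a 300-cycle miss penalty:
print(effective_cpi(1.0, 0.02, 300))  # 7.0 -> the core runs ~7x slower
```

A 2% miss rate turns a 1-CPI core into a 7-CPI core, so most of its cycles are spent stalled, and that is exactly the idle time a neighboring core can fill with useful work.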