
Obsolete before old: What’s the lifespan of your laptop?

I used my desktop for about seven years before buying a laptop a few months ago. I'm not an electronics geek; everything works fine for me until it really starts acting up, so no worries here. But yes, the new tech keeps arriving with big improvements over the previous generation, and that may have started to become a concern for people like me ;)
 
Nope, in fact the earlier Celerons at 65nm could exceed ~8GHz on LN2, but nowadays IVB won't cross ~7GHz on 22nm with the same cooling! Generally, the fewer the transistors, the higher they can be clocked (more raw per-core performance), because more transistors means more heat and therefore a lower achievable frequency; hence the need for multicore processors clocked lower than the single-core models of yesteryear. In fact, the placement of transistors on the chip matters even more, to prevent the formation of hotspots that reduce the overall performance of the chip!
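For context (my own addition, not from the post above), the standard first-order model of CMOS switching power makes the heat argument explicit:

```latex
% Dynamic (switching) power of a CMOS chip, to first order:
%   alpha = activity factor, C = total switched capacitance
%   (grows with transistor count), V = supply voltage, f = clock frequency
P_{\text{dynamic}} \approx \alpha \, C \, V^{2} f
```

Since C scales with the number of transistors, keeping the power (and hence heat) budget fixed while adding transistors forces the voltage or the frequency down, which is why many moderately clocked cores fit in the same thermal envelope as one very highly clocked core.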

We both agreed that heat dissipation is a major factor in chip design, so no argument there. My point was that raw processor performance is nowhere near the limiting factor in modern computing and the way to extract performance is to use the chip real estate on multiple cores instead. The hardware designers realized this some time ago; the issues with hotspots only strengthened the argument.

You are partly right about the process node, but microarchitecture refinement plays a greater role here. For instance, the latest IVB is at best ~20% faster than the previous-gen SNB while using essentially the same microarchitecture, going from the 32nm fab process to 22nm tri-gate. :pop:

You can push the firmware till kingdom come, it doesn't have a ghost of a prayer given RAM latencies. A single processor will only do so much useful work; the only path forward is multicore and beyond.
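To illustrate the RAM-latency point, here is a rough sketch of my own (not from the thread): a pointer-chasing loop forces every load to wait for the previous one to return from memory, so the core stalls no matter how high it is clocked.

```c
/* Rough pointer-chasing sketch: each load depends on the previous one,
 * so throughput is set by DRAM latency, not by CPU clock speed. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1u << 24)                       /* ~16M entries (~128 MB), well past any cache */

static size_t xorshift(size_t *s)          /* tiny PRNG, avoids rand() range issues */
{
    *s ^= *s << 13; *s ^= *s >> 7; *s ^= *s << 17;
    return *s;
}

int main(void)
{
    size_t *next = malloc((size_t)N * sizeof *next);
    if (!next) return 1;

    /* Sattolo's shuffle builds one big random cycle, defeating the prefetchers. */
    for (size_t i = 0; i < N; i++) next[i] = i;
    size_t seed = 88172645463325252u;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = xorshift(&seed) % i;    /* j < i keeps the permutation cyclic */
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    clock_t start = clock();
    size_t p = 0;
    for (size_t i = 0; i < N; i++)
        p = next[p];                       /* serial dependency chain of loads */
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    printf("%u dependent loads in %.2fs (~%.0f ns each), end=%zu\n",
           N, secs, secs * 1e9 / N, p);
    free(next);
    return 0;
}
```

On a typical desktop each dependent load costs roughly the DRAM access latency (on the order of 100ns), while the same loop over a small cache-resident array runs orders of magnitude faster; extra clock speed does almost nothing to close that gap.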
 
I will post a documentary on planned obsolescence in the video section... do watch it, guys, it's a great documentary.
 
You can push the firmware till kingdom come, it doesn't have a ghost of a prayer given RAM latencies. A single processor will only do so much useful work; the only path forward is multicore and beyond.
My only qualms with this are ~

1) It has been shown (by Intel) that more than 8 cores on a desktop is wasteful: adding cores gives diminishing returns, and in fact the performance gain is offset by greater heat and reduced efficiency, not to mention that no consumer-level apps can scale to eight cores as of now!

2) With shrinking nodes (possible only down to ~1nm) we have a massive issue of leakage currents, and from what I know of the problems most semiconductor manufacturers currently face, the lithographic process for etching silicon is going to be even less efficient (in terms of computational gains) going forward!

In other words, we have ~20 years at most before we'll have to move from the Si-based semiconductor industry to perhaps graphene, and shrinking process nodes won't help much because we'll have to abandon the current lithographic silicon-etching technology sooner rather than later! Yeah, I know I'm being way too @n@! about this, but hey, that's an Indian for you; who says we're any less innovative/informative than the CN are? :smokin:
 
1) It has been shown (by Intel) that more than 8 cores on a desktop is wasteful: adding cores gives diminishing returns, and in fact the performance gain is offset by greater heat and reduced efficiency, not to mention that no consumer-level apps can scale to eight cores as of now!

I am surprised the gains go beyond 3 or 4 cores, let alone 8.

As you said, most software out there is written for serial execution. The only parallelism which might be achieved on a typical consumer machine would be the multithreaded opsys on one (or two) dedicated cores, and application level programs on the remaining cores. Since most people don't run more than a couple of apps at a time, that would exhaust the parallelism potential right there.
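To put a rough number on that intuition (my own addition, using Amdahl's law rather than anything from the thread): if a fraction p of a program's work can run in parallel, the best-case speedup on N cores is

```latex
% Amdahl's law: best-case speedup on N cores when a fraction p of the work
% can be executed in parallel.
S(N) = \frac{1}{(1 - p) + \frac{p}{N}}
```

Even with a generously parallel consumer workload of p = 0.75, that gives S(4) ≈ 2.3, S(8) ≈ 2.9, and a hard ceiling of S(∞) = 4, which matches the observation that useful gains flatten out after a handful of cores.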

2) With shrinking nodes (possible only down to ~1nm) we have a massive issue of leakage currents, and from what I know of the problems most semiconductor manufacturers currently face, the lithographic process for etching silicon is going to be even less efficient (in terms of computational gains) going forward!

In other words, we have ~20 years at most before we'll have to move from the Si-based semiconductor industry to perhaps graphene, and shrinking process nodes won't help much because we'll have to abandon the current lithographic silicon-etching technology sooner rather than later! Yeah, I know I'm being way too @n@! about this, but hey, that's an Indian for you; who says we're any less innovative/informative than the CN are? :smokin:

I am afraid a dirty little secret of computing is about to be exposed: Except for certain specialized communities who have the skills and experience to write parallel software (e.g. in the scientific, media, gaming and financial service sectors, as well as low level system software like operating systems and databases), the fact is that the vast majority of software professionals haven't a clue how to write proper parallel software.

The software industry is woefully unprepared for the coming demand for software that exploits hardware parallelism, and the hardware will simply go to waste in the vast majority of cases. Will the specialist markets be enough to keep the hardware makers motivated if the wider commercial market has no use for the additional processing power?
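As a minimal illustration of the kind of thing "proper parallel software" has to get right (my own sketch, not from the thread), here are two threads bumping a shared counter; the unsynchronized version quietly loses updates:

```c
/* Minimal data-race sketch: two threads increment a shared counter.
 * The unprotected counter loses updates; the mutex-protected one does not. */
#include <pthread.h>
#include <stdio.h>

#define ITERS 1000000
static long racy = 0;                 /* unprotected shared counter */
static long safe = 0;                 /* protected by a mutex       */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        racy++;                       /* read-modify-write: not atomic */
        pthread_mutex_lock(&lock);
        safe++;                       /* serialized: always correct    */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Both should be 2000000; 'racy' usually comes up short. */
    printf("racy = %ld, safe = %ld\n", racy, safe);
    return 0;
}
```

Compile with gcc -pthread; the racy counter typically lands well below 2,000,000 while the mutex-protected one is exact. Scaling a real application across cores means finding and controlling every such interaction, which is exactly the skill most of the industry has not had to practice.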
 
Mine is 2 years old. Maybe another 2 years of life left.
 