Re: Deflation: It's time to call a spade a spade.

> Whether you have 24 processors with each dedicated to a window or one dithering between the 24 very quickly doesn’t make much difference from a human productivity point of view.
If we could build one processor that ran 24 times faster as economically as we could build 24 parallel processors, then the logical equivalence you note would apply.
But as Intel learned the hard way when they found it exponentially more costly to make Pentiums faster than about 3 GHz, there are practical limits to the speed at which one can economically drive a CPU, at whatever scale one is working (one CPU per cabinet, drawer, board, chip, or core).
Thirty years ago, if someone pointed at a cabinet and said it was a computer, I assumed there was one processor in that cabinet. Now, I assume that there are dozens or hundreds of processors in there. The last system I worked on, before retiring from that career this month, had thousands of CPUs per cabinet.
When we (those designing computers, as I did for 30 years) hit the physics limits for practical, economical speed imposed by packaging choices at a particular scale, we are forced to start using multiple processors at that scale, to get continued computational growth.
The questions of which workloads and which algorithms run well on which computer architectures are then driven by the economics of computer design. If one way of packaging a gigaflop of processing is ten or a hundred times cheaper than another way, then people will burn the midnight oil to get their workload to run on the cheaper architecture.
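To make that concrete, here is a minimal sketch of the kind of rework involved. The 24-worker count, the toy summation workload, and every name in it are my own illustration, not any particular product: a loop that one very fast processor would simply run straight through is instead carved into chunks and handed to many cheaper processors, here using POSIX threads.

/*
 * Illustrative sketch only: split a trivially divisible workload
 * (summing an array) across many cheap processors instead of
 * running it on one fast one.  Build with:  cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>

#define NWORKERS 24          /* one worker per cheap processor */
#define N        (1 << 22)   /* total elements to process */

static double data[N];

struct slice { int first, last; double partial; };

static void *sum_slice(void *arg)
{
    struct slice *s = arg;
    s->partial = 0.0;
    for (int i = s->first; i < s->last; i++)
        s->partial += data[i];           /* each worker sums its own chunk */
    return NULL;
}

int main(void)
{
    pthread_t tid[NWORKERS];
    struct slice work[NWORKERS];

    for (int i = 0; i < N; i++)
        data[i] = 1.0;                   /* stand-in for a real workload */

    /* Carve the array into NWORKERS contiguous slices and start the workers. */
    for (int w = 0; w < NWORKERS; w++) {
        work[w].first = w * (N / NWORKERS);
        work[w].last  = (w == NWORKERS - 1) ? N : (w + 1) * (N / NWORKERS);
        pthread_create(&tid[w], NULL, sum_slice, &work[w]);
    }

    /* Combine the partial results once every worker finishes. */
    double total = 0.0;
    for (int w = 0; w < NWORKERS; w++) {
        pthread_join(tid[w], NULL);
        total += work[w].partial;
    }
    printf("total = %f\n", total);
    return 0;
}

On a 24-core box each chunk runs at the same time; the extra carving, coordination, and combining at the end is exactly the midnight oil that one sufficiently fast processor would have let you skip.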