I spend most of my days up to my armpits in PC parts, writing ads for cables, cases, controller cards, and other gear that, for the most part, doesn’t exactly get my blood pumping. Every so often, however, a new processor comes my way and I get a little bit excited. No other part plays such a crucial role in determining the pure oomph of a system as the CPU does. Because the CPU is so important, I’ve decided to create a multi-part post explaining the terms you’ll come across while shopping for a processor and how they should affect your chip choice.
Let’s start by defining two of the most common specs you’re likely to come across while shopping for a CPU.
Operating Frequency / Clock Speed
A processor’s frequency, measured in hertz (MHz, GHz, etc.), indicates the number of clock cycles the processor can perform each second. A 3.2GHz CPU, for example, can do what it does 3.2 billion times each second. Within a single chip family, a higher speed is always better, but frequency alone isn’t an accurate way to compare two different CPUs.
It’s a lousy comparison because the ruler we’re using, hertz, only measures cycles per second, not the amount of work completed in each cycle. Bike pedals are a good example of this. If the pedals on a bike make one 360-degree rotation each second, they’re operating at 1Hz, or one cycle per second. The amount of work accomplished by that 1Hz of effort, however, can vary greatly depending on what gear the bike is in. 1Hz in first gear would carry the rider only inches, while the same cycle in fourth gear would take him or her much further.
To apply this example to processors, replace bike gears with CPU architecture (I’ll talk more about architecture in Part 2), the factor that determines the amount of work done per cycle. Two processors using the same architecture, Intel’s i5 Sandy Bridge CPUs for example, can be compared by frequency alone, since they both process instructions in exactly the same way. But if you compared an i5 Sandy Bridge CPU with an AMD Phenom II Thuban CPU, you’d gain very little by comparing their operating frequencies.
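If you like, you can put rough numbers on the bike analogy. The sketch below is a toy calculation, not a benchmark: the IPC (instructions-per-cycle, the "gear") figures are made up purely for illustration, and real chips vary by workload.

```python
# Toy model: effective performance is roughly clock speed times work per
# cycle (IPC), just as a bike's speed depends on cadence AND gear.

def effective_performance(ghz, ipc):
    """Rough instructions-per-second estimate: cycles/sec * work/cycle."""
    return ghz * 1e9 * ipc

# Hypothetical IPC values for illustration only -- not measured figures.
cpu_a = effective_performance(ghz=3.0, ipc=2.0)  # lower clock, more work/cycle
cpu_b = effective_performance(ghz=3.5, ipc=1.5)  # higher clock, less work/cycle

assert cpu_a > cpu_b  # the "slower" 3.0GHz chip actually comes out ahead
```

In other words, the chip with the smaller number on the box can still win, which is exactly why cross-architecture frequency comparisons tell you so little.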
Number of Cores

The core of a processor is the part of the chip that actually performs the calculations needed to keep your computer running. If you think of your processor as a DMV office, the processor’s cores are the grumpy workers behind the desks. A single-core processor would be an office with one grumpy worker; a dual-core, an office with two workers; a triple-core, three workers; a quad-core, four; and so on. Picture all your programs as the people waiting in line to be helped by these workers, and you can see why having multiple cores is such a big bonus. A dual-core CPU can devote one core to the lady with screaming triplets who needs to register a new semi-truck, while saving the other core for the guy who just needs to renew his registration.
When multi-core processing was first introduced, it was seen as a trade-off of multi-tasking capacity vs. raw power, like trading in a Mustang for two minivans. This was because many programs released back then weren’t equipped to divide themselves between two cores. So a dual-core CPU was great if you were running two small programs, like a web browser and a word processor, but not so great if you were running a single beefy application like a computer game. The big program, unable to divide itself, would flood one of the cores while leaving the other sitting around twiddling its thumbs.
This problem may still apply if you’re running older software on an older operating system. But if you’re dealing with mostly newer programs, which you most likely are if you’re shopping for a new CPU, your PC will have no trouble dividing the work among your cores.
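For the curious, here is a minimal sketch of how modern software hands work to multiple cores. It uses Python’s standard multiprocessing pool; the `busy_task` function is a made-up stand-in for real work, and the job list is arbitrary example data.

```python
# A DMV-style work queue: a pool spawns one worker process per core,
# and map() hands each "customer" (job) to whichever worker is free.
from multiprocessing import Pool, cpu_count

def busy_task(n):
    """Stand-in for real work: sum the integers 0 through n-1."""
    return sum(range(n))

if __name__ == "__main__":
    jobs = [100, 200, 300, 400]              # four customers in line
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(busy_task, jobs)  # divided among the cores
    print(results)                           # [4950, 19900, 44850, 79800]
```

The key point mirrors the paragraph above: the program itself splits the work into pieces, so no single core gets flooded while the others sit idle.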
Well, that’s it for part one. You can read part two here.
If you enjoyed this article, please share it using one of the social networking buttons below. Also, remember to shop OutletPC for all your hardware needs.