|
Operating at Google’s scale requires the company to treat each machine as expendable. Server makers pride themselves on their high-end machines’ ability to withstand failures, but Google prefers to invest its money in fault-tolerant software.
“Our view is it’s better to have twice as much hardware that’s not as reliable than half as much that’s more reliable,” Dean said. “You have to provide reliability on a software level. If you’re running 10,000 machines, something is going to die every day.”
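To put a rough number on that claim, here’s a back-of-envelope sketch in Python; the once-every-three-years failure rate is my own assumption for illustration, not a figure from Dean’s talk:

    machines = 10_000
    mtbf_days = 3 * 365            # assumed per-machine mean time between failures
    failures_per_day = machines / mtbf_days
    print(f"expected failures per day: {failures_per_day:.1f}")   # roughly 9

Even with fairly reliable individual machines, a fleet that size sees failures every day, which is why the recovery logic has to live in software.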
[…]
While Google uses ordinary hardware components for its servers, it doesn’t use conventional packaging. Google required Intel to create custom circuit boards. And, Dean said, the company currently uses an in-house design that puts a single case around each 40-server rack rather than the conventional case around each individual server.
[…] As to the servers themselves, Google likes multicore chips, those with many processing engines on each slice of silicon. Many software companies, accustomed to better performance from ever-faster chip clock speeds, are struggling to adapt to the multicore approach, but it suits Google just fine. […] “We really, really like multicore machines,” Dean said.
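As a rough illustration of why that is (my own sketch, not anything from Google’s codebase), this is the kind of embarrassingly parallel work that spreads naturally across however many cores a chip provides:

    from multiprocessing import Pool

    def count_words(chunk):
        # each chunk is independent, so chunks can be processed on separate cores
        return len(chunk.split())

    if __name__ == "__main__":
        chunks = ["the quick brown fox", "jumps over", "the lazy dog"] * 1000
        with Pool() as pool:             # defaults to one worker process per core
            counts = pool.map(count_words, chunks)
        print(sum(counts))               # 9000

More cores mean more chunks in flight at once, even if no individual core is getting any faster.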
[…]
Dean described three core elements of Google’s software: GFS (the Google File System), BigTable, and the MapReduce algorithm.
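For anyone who hasn’t run into MapReduce before, here’s a toy, single-process sketch of the idea (map, group by key, reduce). The real system distributes these phases across thousands of machines reading data out of GFS; this is just the shape of it:

    from collections import defaultdict

    def map_phase(document):
        for word in document.split():
            yield word, 1                 # emit (key, value) pairs

    def reduce_phase(word, counts):
        return word, sum(counts)          # combine all values for one key

    documents = ["the quick brown fox", "the lazy dog", "the fox"]

    grouped = defaultdict(list)           # the "shuffle": group mapped values by key
    for doc in documents:
        for word, count in map_phase(doc):
            grouped[word].append(count)

    print(dict(reduce_phase(w, c) for w, c in grouped.items()))
    # {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}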
[…]
On any given day, Google runs about 100,000 MapReduce jobs; each occupies about 400 servers and takes about 5 to 10 minutes to finish, Dean said. That’s a basis for some interesting math. Assuming the servers do nothing but MapReduce, that each server works on only one job at a time, and that they run around the clock, MapReduce would occupy about 139,000 servers if the jobs take 5 minutes each.
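The arithmetic is easy to reproduce, using the same assumptions (one job per server at a time, 5-minute jobs, around-the-clock operation):

    jobs_per_day = 100_000
    servers_per_job = 400
    minutes_per_job = 5                       # the low end of the 5-to-10-minute range

    server_minutes = jobs_per_day * servers_per_job * minutes_per_job
    print(round(server_minutes / (24 * 60)))  # about 138,889, i.e. roughly 139,000 servers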
Wow, that was an interesting article. It’s easy to forget how amazing Google is, given how much we all use it.
I have to wonder… is Google going to collapse under its own weight and complexity?
Or, if (when!) computing technology makes a radical change, will Google be able to make the change with it? Will all that hardware be an anchor to old technology?
What OS do they use?
Right, the Google OS…
I also couldn’t stop saying WOW!
270,000+ processors dedicated to a single task. Amazing.
It also, strangely enough, made me think of Ray Kurzweil and the Singularity.
He believes that with enough computing horsepower, a computer will achieve awareness. Well, it’s obvious to the rest of us that a very powerful computer is still a computer, and even with the amazing computing power in Google, their servers have not become Colossus.
“[…] even with the amazing computing power in Google, their servers have not become Colossus.”
Or SkyNet…
I’m used to working with large numbers and the mind-numbing vastness of the Milky Way. But trying to comprehend 270,000 servers all interconnected is just as amazing.
What a great article, thanks Uncle Dave!
Not only that, but Google’s processing power has almost doubled in just over two and a half years.
They are building new datacentres, and replacing older hardware with newer designs at a massive rate.
Interesting take on the reliable software vs. hardware issue. That would take a good deal of stress off the sysadmins for sure.
Sort of makes some of the datacenters around this area (Des Moines, Iowa) look puny.
But if they had a datacenter in Des Moines, I would love to work for them, even part time.