Roland Piquepaille's Technology Trends
How new technologies are modifying our way of life


Saturday, September 4, 2004
 

An international group of cosmologists, the Virgo Consortium, has carried out the first simulation of the entire universe, starting 380,000 years after the Big Bang and running up to the present day. In "Computing the Cosmos," IEEE Spectrum writes that the scientists used a 4.2-teraflops system at the Max Planck Society's Computing Center in Garching, Germany, to do the computations. The whole universe was simulated by ten billion particles, each having a mass a billion times that of our sun. Because computing the gravitational interactions between each of the ten billion mass points and all the others would have taken about 60,000 years, the computer scientists devised a couple of tricks to reduce the amount of computation. And in June 2004, the first simulation of our universe was completed. The resulting data, about 20 terabytes, will be available to everyone in the months to come -- at least to people with a high-bandwidth connection. Read more...

Here is a general overview of the project.

The group, dubbed the Virgo Consortium -- a name borrowed from the galaxy cluster closest to our own -- is creating the largest and most detailed computer model of the universe ever made. While other groups have simulated chunks of the cosmos, the Virgo simulation is going for the whole thing. The cosmologists' best theories about the universe's matter distribution and galaxy formation will become equations, numbers, variables, and other parameters in simulations running on one of Germany's most powerful supercomputers, an IBM Unix cluster at the Max Planck Society's Computing Center in Garching, near Munich.

Now, here are some details about this cluster -- and its limitations.

The machine, a cluster of powerful IBM Unix computers, has a total of 812 processors and 2 terabytes of memory, for a peak performance of 4.2 teraflops, or trillions of calculations per second. It took 31st place late last year in the Top500 list, a ranking of the world's most powerful computers by Jack Dongarra, a professor of computer science at the University of Tennessee in Knoxville, and other supercomputer experts.
But as it turns out, even the most powerful machine on Earth couldn't possibly replicate exactly the matter distribution conditions of the 380,000-year-old universe the Virgo group chose as the simulation's starting point. The number of particles is simply too large, and no computer now or in the foreseeable future could simulate the interaction of so many elements.

To understand why such a powerful system cannot handle this simulation in a reasonable amount of time, we need to look at the parameters of this simulation.

The fundamental challenge for the Virgo team is to approximate that reality in a way that is both feasible to compute and fine-grained enough to yield useful insights. The Virgo astrophysicists have tackled it by coming up with a representation of that epoch's distribution of matter using 10 billion mass points, many more than any other simulation has ever attempted to use.
These dimensionless points have no real physical meaning; they are just simulation elements, a way of modeling the universe's matter content. Each point is made up of normal and dark matter in proportion to the best current estimates, having a mass a billion times that of our sun, or 2000 trillion trillion trillion (2 x 10^39) kilograms. (The 10 billion particles together account for only 0.003 percent of the observable universe's total mass, but since the universe is homogeneous on the largest scales, the model is more than enough to be representative of the full extent of the cosmos.)
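For those who want to check that figure, the arithmetic is simple; the solar mass value below is the standard one, not a number taken from the article.

```python
# Quick check of the per-particle mass quoted above. The solar mass is the
# standard ~1.989e30 kg; this is plain arithmetic, not data from the article.
M_SUN = 1.989e30                      # kg
m_particle = 1e9 * M_SUN              # a billion solar masses per mass point
m_total = 1e10 * m_particle           # all ten billion points together
print(f"{m_particle:.1e} kg per point, {m_total:.1e} kg in total")
# -> about 2.0e+39 kg per point, 2.0e+49 kg in total
```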

With these ten billion points, the Virgo team faced a serious challenge.

The software [astrophysicist Volker Springel] and his colleagues developed calculates the gravitational interactions among the simulation's 10 billion mass points and keeps track of the points' displacements in space. It repeats these calculations over and over, for thousands of simulation time steps.
The simulation, therefore, has to calculate the gravitational pull between each pair of mass points. That is, it has to choose one of the 10 billion points and calculate its gravitational interaction with each of the other 9,999,999,999 points, even those at the farthest corners of the universe. Next, the simulation picks another point and does the same thing again, with this process repeated for all points. In the end, the number of gravitational interactions to be calculated reaches 100 million trillion (1 followed by 20 zeros), and that's just for one time step of the simulation. If it simply chugged through all of the thousands of time steps of the Millennium Run, the Virgo group's supercomputer would have to run continuously for about 60,000 years.
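To get a feel for these numbers, here is a quick back-of-the-envelope check in Python. Only the particle count and the cluster's 4.2-teraflops peak speed come from the article; the cost of a single interaction and the number of time steps are assumptions I made for the sake of illustration.

```python
# Back-of-the-envelope cost of a brute-force (all-pairs) N-body simulation.
# Only N and the machine speed come from the article; flops-per-interaction
# and the number of time steps are assumed values for illustration.
N = 10_000_000_000                     # mass points
interactions_per_step = N * (N - 1)    # each point against every other point
print(f"interactions per time step: {interactions_per_step:.1e}")   # ~1e20

machine_flops = 4.2e12                 # peak speed of the Garching cluster
flops_per_interaction = 10             # assumed cost of one pairwise force
time_steps = 5_000                     # assumed ("thousands of time steps")

seconds = (interactions_per_step * flops_per_interaction * time_steps
           / machine_flops)
print(f"brute-force runtime: ~{seconds / (3600 * 24 * 365):,.0f} years")
# Lands in the tens of thousands of years, the same ballpark as the
# 60,000-year figure quoted in the article.
```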

Because this was obviously unacceptable, Springel and his colleagues used a couple of tricks to reduce the amount of computation.

First, the researchers divided the simulated cube into several billion smaller volumes. During the gravitational calculations, points within one of these volumes are lumped together -- their masses are summed. So instead of calculating, say, a thousand gravitational interactions between a given particle and a thousand others, the simulation uses an algorithm to perform a single calculation if those thousand points happen to fall within the same volume. For points that are far apart, this approximation doesn't introduce notable errors, while it does speed up the calculations significantly.
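To make the lumping idea concrete, here is a toy sketch in Python. It is not the Virgo production code; the grid resolution, units and softening term are arbitrary choices, and the point is only to show the aggregation step.

```python
import numpy as np

# Toy illustration of the lumping trick: particles are binned into coarse
# grid cells, each cell's total mass and centre of mass are accumulated, and
# the pull from a distant region is then computed from that single aggregate.
rng = np.random.default_rng(0)
G = 1.0
n_particles, n_cells, box = 100_000, 16, 1.0          # 16 x 16 x 16 grid

pos = rng.random((n_particles, 3)) * box               # particle positions
mass = np.full(n_particles, 1.0)                       # equal-mass points

# Assign each particle to a cell, then sum mass and centre of mass per cell.
idx = np.minimum((pos / box * n_cells).astype(int), n_cells - 1)
flat = idx[:, 0] * n_cells**2 + idx[:, 1] * n_cells + idx[:, 2]
cell_mass = np.bincount(flat, weights=mass, minlength=n_cells**3)
cell_com = np.zeros((n_cells**3, 3))
for d in range(3):
    cell_com[:, d] = np.bincount(flat, weights=mass * pos[:, d],
                                 minlength=n_cells**3)
occupied = cell_mass > 0
cell_com[occupied] /= cell_mass[occupied, None]

def force_from_cells(p):
    """Pull on point p from every occupied cell's aggregate mass.
    For simplicity even the cell containing p is lumped here; a real code
    would treat nearby particles individually, as the next excerpt explains."""
    r = cell_com[occupied] - p
    d2 = (r**2).sum(axis=1) + 1e-4                     # softened squared distance
    return (G * cell_mass[occupied, None] * r / d2[:, None]**1.5).sum(axis=0)

print(force_from_cells(np.array([0.5, 0.5, 0.5])))
```

Instead of 100,000 pairwise terms per particle, the sum above runs over at most 4,096 cell aggregates.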

They used another method for short-distance interactions.

Springel developed new software with what is called a tree algorithm to simplify and speed up the calculations for this realm of short-distance interactions. Think of all 10 billion points as the leaves of a tree. Eight of these leaves attach to a stem, eight stems attach to a branch, and so on, until all the points are connected to the trunk. To evaluate the force on a given point, the program climbs up the tree from the root, adding the contributions from branches and stems found along the way until it encounters individual leaves. This trick reduces the number of required calculations from an incomputable n^2 to a much more manageable n log n, says Springel.
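For readers who want to see what such a tree looks like in practice, here is a generic Barnes-Hut-style sketch in Python. It is not the Virgo group's production code, and the opening angle, softening and units are arbitrary choices; it only shows how aggregating distant branches turns the n^2 sum into roughly n log n work.

```python
import numpy as np

# Generic Barnes-Hut-style octree sketch (NOT the Virgo group's code).
# Distant branches are represented by their aggregate mass; nearby ones
# are opened and descended into, down to individual particles.
G, THETA, SOFT = 1.0, 0.5, 1e-3        # arbitrary constants for illustration

class Node:
    def __init__(self, center, size):
        self.center, self.size = center, size   # cube centre and edge length
        self.mass, self.com = 0.0, np.zeros(3)  # total mass, centre of mass
        self.children = None                    # the eight sub-cubes, once split
        self.point = None                       # single particle held by a leaf

    def insert(self, p, m):
        if self.mass == 0.0 and self.children is None:
            self.point, self.mass, self.com = p, m, p.copy()   # empty leaf
            return
        if self.children is None:               # occupied leaf: split it
            self.children = [None] * 8
            self._push_down(self.point, self.mass)
            self.point = None
        self._push_down(p, m)
        self.com = (self.com * self.mass + p * m) / (self.mass + m)
        self.mass += m

    def _push_down(self, p, m):
        octant = sum(1 << d for d in range(3) if p[d] > self.center[d])
        if self.children[octant] is None:
            offset = np.array([1 if (octant >> d) & 1 else -1
                               for d in range(3)]) * self.size / 4
            self.children[octant] = Node(self.center + offset, self.size / 2)
        self.children[octant].insert(p, m)

    def force_on(self, p):
        r = self.com - p
        d = np.sqrt((r**2).sum()) + SOFT
        # Distant node (or a single particle): use its aggregate mass.
        if self.children is None or self.size / d < THETA:
            return G * self.mass * r / d**3
        # Otherwise open the node and descend into its occupied children.
        return sum((c.force_on(p) for c in self.children if c is not None),
                   np.zeros(3))

# Tiny usage example: build the tree, then ask for the force on one point.
rng = np.random.default_rng(1)
pts = rng.random((2000, 3))
root = Node(center=np.array([0.5, 0.5, 0.5]), size=1.0)
for q in pts:
    root.insert(q, 1.0)
print(root.force_on(pts[0]))   # visits ~log(n) levels instead of all n points
```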

After these two tricks were introduced into the software, the simulation started. It was completed in June 2004, generating about 20 terabytes of results. These results, which represent 64 snapshots of a virtual universe, will be available to all of us in the months to come. But who will really have access to such an amount of data outside universities and research centers? My guess is that the Virgo Consortium will find a way to reduce the size of the snapshots for regular folks. So stay tuned for the next developments.

Source: Alexander Hellemans & Madhusree Mukerjee, "Computing the Cosmos," IEEE Spectrum, Vol. 41, No. 8, p. 28, August 2004













