Pages: [1]
Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,333

2010-02-03 20:05:57


By: John Vickers - Project Scientist


We have been getting a lot of questions lately about whether we are getting real scientific results, why the new Sgr runs are achieving better likelihoods, what the numbers being crunched actually mean, what our scientific goals are, and so on. In response we've posted our publications on the front page, but, as one BOINC user put it: you guys aren't astrophysicists. I'm going to try to summarize the physics portion of this project in layman's terms, from the data collection to the current achievements to the future plans.

In the beginning there were stars. Then there were people, and every night these people would look up into the skies and notice how intriguingly complex a place the heavens really were. Wandering planets traveled through an ever-turning mosaic of mythology and mysticism. The sky was one of the biggest mysteries of the ancient world: what did these pictures mean?

Our project begins with the Sloan Digital Sky Survey (SDSS), an ambitious project whose goal is to map out as large a portion of the sky as possible. To date the SDSS has mapped about a quarter of the sky, cataloguing over 300 million objects.



More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,333

2010-02-06 07:31:30


BSN*: What performance gain was achieved after you completed the optimization work on the code?

Gipsel: It depends on what you are comparing. The original source code of Milkyway@Home was grossly inefficient; it simply wasted a lot of time. The first things to do were not optimizations in the usual sense; one had to clean up the algorithm Milkyway@Home is using. In the meantime, most of my suggestions for improvements have been implemented in the sources maintained by Milkyway@Home. That brought the calculation times for a workunit (WU) down massively.

Using my CPU-optimized code, a 65nm Core 2 or a Phenom running at 3GHz takes just over four minutes to crunch one of today's short WUs. The stock applications distributed by the project are a bit slower; they take roughly 10-18 minutes. In November 2008, the same WUs would have taken a full day on the same CPUs (MW uses longer WUs now). Taking my optimizations into account, Milkyway@Home experienced a speedup of about a factor of 100 on the CPU alone.

But I think readers are mostly interested in the GPU application. An ATI Radeon HD4870 completes the same WUs in only nine seconds. Since a quad-core CPU calculates four WUs at once, a 3GHz quad will effectively complete four WUs in about four minutes with the fastest application. In the same time, ATI's Radeon HD4870 will complete about 25 WUs: six times the throughput for about the same price. Even a last-generation Radeon HD3800 will complete 8-10 WUs in four minutes, still more than double what a fast quad-core CPU can do. If you add up all the improvements, you see that a single HD4870 is now doing more science than the whole project did a couple of months ago! If you compare the beginning of the project with today's situation, you could claim a gain from "one WU a day" on a single Core 2 processor @ 3GHz to almost 10,000 WUs a day with a HD4870 [this is a living testament to what code optimization can achieve - imagine if every application had such a dedicated code optimizer - Ed.].
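The throughput comparison in the interview can be sanity-checked with a few lines of arithmetic. This is a sketch using only the timings quoted above (the nine-second GPU figure and the roughly four-minute per-core CPU figure); the variable names are mine, not the project's:

```python
# Timings quoted in the interview above.
GPU_SECONDS_PER_WU = 9        # ATI Radeon HD4870, one short WU
CPU_SECONDS_PER_WU = 4 * 60   # ~4 minutes per WU per core (optimized code)
WINDOW = 4 * 60               # compare over a four-minute window

gpu_wus = WINDOW / GPU_SECONDS_PER_WU        # one GPU working alone
cpu_wus = 4 * WINDOW / CPU_SECONDS_PER_WU    # quad-core: four WUs in parallel

print(f"HD4870: ~{gpu_wus:.0f} WUs, 3GHz quad-core: {cpu_wus:.0f} WUs")
print(f"throughput ratio: ~{gpu_wus / cpu_wus:.1f}x")
```

This yields about 27 WUs per four minutes for the GPU, slightly above the interview's rounded figure of 25; the small gap comes from rounding the nine-second timing, and either value supports the "roughly six times a quad-core" claim.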


More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,333

2010-02-06 08:18:41


Accelerating the MilkyWay@Home volunteer computing project with GPUs


This paper discusses the implementation and optimization of the MilkyWay@Home client application for both Nvidia and ATI GPUs. A 17-times speedup was achieved for double-precision calculations on an Nvidia GeForce GTX 285 card, and a 109-times speedup for double-precision calculations on an ATI HD5870 card, compared to the CPU version running on one core of a 3.0GHz AMD Phenom(tm) II X4 940.

Performing single-precision calculations was also evaluated, and the methods presented improved the accuracy of the final results from 5 to 8 significant digits (compared to 16 significant digits with double precision). On the same hardware, using single precision further increased performance by 6.2 times on the ATI card and 7.8 times on the Nvidia card. Utilizing these GPU applications on MilkyWay@Home has provided an immense amount of computing power, at the time of publication approximately 216 teraflops.
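The precision trade-off the abstract describes can be felt with a toy example (an illustrative sketch, not the paper's actual method): a single-precision float carries about 7 significant digits, so a small contribution added to a large accumulator can vanish entirely, while double precision retains it.

```python
import numpy as np

# float32 keeps ~7 significant digits, float64 ~16.
small = 1e-8

x32 = np.float32(1.0) + np.float32(small)   # the small term is lost entirely
x64 = np.float64(1.0) + np.float64(small)   # the small term is retained

print(f"float32: 1 + 1e-8 = {float(x32):.10f}")   # prints 1.0000000000
print(f"float64: 1 + 1e-8 = {x64:.10f}")          # prints 1.0000000100
```

Losses like this accumulate over the millions of additions in a likelihood integral, which is why the paper needed special methods just to recover 5-8 significant digits in single precision.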


More . . .


2010-02-11 10:29:34

PCs Around the World Unite To Map the Milky Way

Combined computing power of the MilkyWay@Home project recently surpassed the world's second fastest supercomputer


At this very moment, tens of thousands of home computers around the world are quietly working together to solve the largest and most basic mysteries of our galaxy.

More . . .
Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,333

2010-02-28 19:03:20


MilkyWay@Home project has thousands of computers looking into space


In eight years of operation, the Sloan Digital Sky Survey (SDSS) has obtained deep, multi-color images covering more than a quarter of the sky and created 3-dimensional maps containing more than 930,000 galaxies and more than 120,000 quasars. The SDSS used a dedicated 2.5-meter telescope at Apache Point Observatory, New Mexico.

The 120-megapixel camera imaged 1.5 square degrees of sky at a time, about eight times the area of the full moon. A pair of spectrographs fed by optical fibers measured spectra of more than 600 galaxies and quasars in a single observation. A custom-designed set of software pipelines kept pace with the enormous data flow from the telescope, according to the SDSS Web site.

Each user participating in the project signs up their computer and offers a percentage of the machine's processing power to be dedicated to calculations for the project. For the MilkyWay@Home project, each personal computer uses data gathered about a very small section of the galaxy to map its shape, density, and movement, according to Rensselaer.



More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,333

2010-05-27 11:34:26


PCs around the world unite to map the Milky Way


Combined computing power of the MilkyWay@Home project recently surpassed the world’s second fastest supercomputer


Enthusiastic and inquisitive volunteers from Africa to Australia are donating the computing power of everything from decade-old desktops to sleek new netbooks to help computer scientists and astronomers at Rensselaer Polytechnic Institute map the shape of our Milky Way galaxy. Now, just this month, the collective computing power of these humble home computers has surpassed one petaflop, a computing speed that exceeds that of the world's second fastest supercomputer.

The project, MilkyWay@Home, uses the Berkeley Open Infrastructure for Network Computing (BOINC) platform, which is widely known for the SETI@home project used to search for signs of extraterrestrial life. Today, MilkyWay@Home has outgrown even this famous project in terms of speed, making it the fastest computing project on the BOINC platform and perhaps the second fastest public distributed computing program ever in operation (just behind Folding@home).

The interdisciplinary team behind MilkyWay@Home, which ranges from professors to undergraduates, began formal development under the BOINC platform in July 2006 and worked tirelessly to build a volunteer base from the ground up to grow its computational power.




More . . .


Index :: The Projects :: MilkyWay@Home Progress in plotting the stars