Hey, have you all been sitting here waiting for me this whole time?
And with so much patience that I get the first spot-below-the-stats? Wow!
A little more patience; first, a quick welcome for Griphin.

Welcome Griphin
There, that's taken care of as well.
For lack of tech_news, I dug up an _old_ quote instead. To make up for its age, I've picked a big slab of text.

24 Jan 2007 20:14:11 UTC
As time goes on, more SETI@home participants are frustrated with the perceived lack of scientific progress. So what's the deal?
Keep in mind the essence of the science is actually really simple. We get lots of data. The function of the SETI@home clients is to reduce this data. They convert a sizeable chunk of frequency/time data into a few signals. These signals alone aren't all that interesting. They only become interesting when similar signals (similar in frequency, shape, and sky position) are found across observations that happen at different times. The process of "matching" these similar signals is the crux of the problem SETI attempts to solve.
During the first five years of SETI@home (the "Classic" years) we never really had a usable science database. Signals were gushing into our servers which, at the time, could barely handle the load of simply inputting them into the database, much less validating the actual science before doing so. Technology improved and we now have somewhat better servers (and the more streamlined BOINC server backend), so we can validate the science and input incoming signals in "real time" (i.e. as fast as they come in). Nevertheless, the result is that we still have an unwieldy science database with almost a billion signals in it, and we're adding two million more every day.
But we had to get to the point where we are today, which meant creating the first "master science" database of validated signals from Classic via a long and painful process of "redundancy checking." Then we had to merge the Classic and BOINC science databases into one. We then had to migrate the data onto a bigger, better server. We still have yet to do the big "database correction" where we clean up the data for final analysis. Each of these projects took (or will take) at least several calendar months. Why? Well, there are lots of important sub-projects and "getting your ducks in a row." Plus a lot of care and planning goes into moving this data around, which vastly slows down progress - for example, each big step along the way requires backing up all the data, which takes an uninterrupted calendar day or two. We don't want to be too cavalier with our master science database, you know? Plus we don't do anything too crazy towards the end of the week so we don't have to deal with chaos over the weekend. So often we're waiting until Monday or Tuesday to do the next small step.
Now factor in the lack-of-manpower issue. I disappear for months at a time to go play rock star (SETI@home is my day job - in return for being overworked/underpaid while I'm here, I get incredible flexibility to do whatever I want whenever I want). Jeff goes backpacking in the Sierras. Eric has a zillion other projects demanding his time, as does Dan. Frequently somebody is sick or dealing with personal issues and out of the office. When we're a "man down" every "big" project comes to a standstill, as we barely have enough people to maintain the day-to-day projects. And when an unpredictable server crisis hits the fan, all bets are off.
Nevertheless, we did manage to squeeze out one set of candidates back in 2003 for reobservation. Basically, we were given a window of opportunity to observe positions of our choosing at Arecibo, and so we all dropped everything and threw all our effort into scouring our database for anything interesting to check out. The database was much smaller then. We also still had some funds from the glorious dot.com donation era, so our staff was a bit larger (we had scientific programmer Steve Fulton and web programmer Eric Person working full time, as well as various extra students). While the science behind this candidate run was sound, we didn't have much time to "do things right," so the code generated to scour our database and select candidates was basically a one-shot deal. The code did the job then, but is basically useless now - something we were well aware of at the time but, due to time constraints, couldn't do anything about.
So where are we now and what's next? Only relatively recently do we have our science database on a server up to the task of doing something other than inserting more signals. As mentioned above, we need to do a big "database correction" - I'm sure more will be written up about this in due time. Then we need to develop the candidate hunter, a.k.a. the "persistency checker," which runs in real time. This latter project recently got an advance kick in the butt thanks to a new part-time programmer (Daniel) working on skymaps for our website - this shares a lot of code with the persistency checker. As well, we have the new multibeam receiver online and collecting lots of data. Tens of terabytes so far, in fact, which will hopefully be distributed to y'all in the coming weeks/months.
I'll finish up by stating that the actual science, while moving at glacial speeds, is not going stale. Remember these signals are potentially coming from light years away - so a few years' delay isn't going to hurt the science. It will hurt user interest, though, which is always a concern and a source of frustration for us - that's a topic for a later time.
- Matt
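
To make Matt's "matching" point a bit more concrete: below is a minimal sketch of what matching similar signals and then checking their persistence over time could look like. This is purely my own illustration, not SETI@home's actual code - all field names, tolerances, and thresholds here are assumptions.

from dataclasses import dataclass
from math import radians, degrees, sin, cos, acos

@dataclass
class Signal:
    freq_hz: float   # detection frequency
    ra_deg: float    # sky position: right ascension
    dec_deg: float   # sky position: declination
    time_s: float    # observation time (seconds since some epoch)

def angular_sep_deg(a: Signal, b: Signal) -> float:
    # Great-circle separation between the two sky positions, in degrees.
    ra1, dec1 = radians(a.ra_deg), radians(a.dec_deg)
    ra2, dec2 = radians(b.ra_deg), radians(b.dec_deg)
    c = sin(dec1) * sin(dec2) + cos(dec1) * cos(dec2) * cos(ra1 - ra2)
    return degrees(acos(max(-1.0, min(1.0, c))))

def similar(a: Signal, b: Signal,
            freq_tol_hz: float = 10.0,  # assumed frequency tolerance
            beam_deg: float = 0.1       # assumed beam width
            ) -> bool:
    # "Similar" here means: close in frequency AND close in sky position.
    return (abs(a.freq_hz - b.freq_hz) <= freq_tol_hz
            and angular_sep_deg(a, b) <= beam_deg)

def persistent_candidates(signals, min_epochs=3, min_gap_s=86400.0):
    # Greedily bucket signals into groups of mutually similar detections,
    # then keep only the groups seen at several well-separated times.
    groups = []
    for s in sorted(signals, key=lambda s: s.time_s):
        for g in groups:
            if similar(g[0], s):
                g.append(s)
                break
        else:
            groups.append([s])

    def epochs(group):
        # Count detections separated by at least min_gap_s.
        n, last = 0, None
        for s in group:
            if last is None or s.time_s - last >= min_gap_s:
                n, last = n + 1, s.time_s
        return n

    return [g for g in groups if epochs(g) >= min_epochs]

A real persistency checker running against a billion-signal database would of course need spatial indexing rather than this quadratic scan, but the idea is the same: one lone signal means nothing; the same signal showing up again and again at the same spot on the sky is what gets interesting.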
Old junk? What's that?.......Everything is fun, even model racing..........BOINC along with DPC too!
......Team Grazzie~Power....!! Mooooooeeeee......