Archives

August 2000


Mad props go out to IronBits for hosting the site, and to all the others who have kept this site going for the past couple of years!

August 13, 2000

The Comeback is Complete!

Team             Members   Work Units
Team Lamb Chop      2696      1221067
Art Bell           12706      1214357

No doubt about it now. Team Lamb Chop has definitely moved into the #1 club team spot, and the #4 team spot overall. There still seems to be a bunch of people jumping ship from Art Bell (women and children first!). Sorry Art…your time has come and gone. Now it is time to step aside and let the Lamb Chop Express through!

Who is next? Well, that must be Compaq. Team Lamb Chop is currently 712,000 WUs behind them. Yup, that looks pretty substantial, but no lead is too large for the Lamb! There is some serious firepower ahead of us, but you know what? Sheer strength and determination can overcome the peddlers of puny Presario power. Estimated Time of Arrival? March 2001.
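
That ETA can be sanity-checked with a quick catch-up calculation. The per-day rates below are illustrative assumptions only (the site never published exact daily figures), but with a net gain of a few thousand WUs a day, a 712,000 WU gap does close around March 2001:

```python
def days_to_catch_up(gap_wu, our_rate, their_rate):
    """Days until a trailing team closes a work-unit gap.

    gap_wu: current deficit in work units
    our_rate / their_rate: average WUs per day for each team (assumed values)
    Returns None if the gap never closes.
    """
    net = our_rate - their_rate
    if net <= 0:
        return None  # they produce at least as fast as we do
    return gap_wu / net

# Hypothetical rates for illustration -- not the teams' real numbers.
days = days_to_catch_up(712_000, our_rate=9_000, their_rate=5_600)
print(round(days))  # 209 days, i.e. early March 2001 counting from mid-August
```

The point of the sketch: the catch-up date is very sensitive to the *net* rate, so a few big new members joining either team moves the ETA by weeks.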


August 11, 2000

Any Day Now?
In the past two days there seems to have been an exodus from Team Art Bell. They have actually lost about 9,000+ work units in the past couple of days. Team leader Art Bell is a leader in name only now that he has stepped down from his on-air duties. Many people look to have jumped ship for better leadership now that their team is looking squarely at being the #2 club team. Couple that with the addition of members to Team Lamb Chop, and their actual lead over TLC is dwindling quite fast. At the time of today's update, this is how things stood:

Team       Members   Work Units
Art Bell     12705      1211563
Ars           2687      1209192

The difference is now down under 10K WUs. The actual number for Art Bell is probably about 4,000 WUs higher than the total above (the total is based on the top 9,999 members on their team). Possibly by the time you get to work on Monday, the crushing of Art Bell will be complete! ;-)

*Another* New Beta?
Eric K. posted on the newsgroup that the problem with v2.71 beta completion times around the ~80% mark appears to be a bug that they may have figured out. I would expect a new beta to see if the fix worked…but if they know for sure what was wrong, there is a possibility that they may just skip a final beta. There has been no word yet on whether there will actually be a new beta, though.

Wetware…YUM
I posed a question on the SETI newsgroup to Eric K., asking whether any of the recently announced funding will help out the current S@H project, and if so, whether there would be any immediate benefits. His response:

The biggest change will be in server wetware. We’ll finally be able to hire a real Informix DBA, which will hopefully prevent future outages, and perhaps make them shorter. I’m not privy to all the details of the deal, but my guess is that a large fraction of the funds will go to costs associated with the development of SETI@home II, which will probably include a new server configuration. We’re also looking at offloading the science post-processing from the science database server, which would reduce load on it and increase capacity. Whether we will use the new funds for that depends upon a lot of things.


August 9, 2000

Passed Art Bell?
Some of you may have passed by the Overall Team Rankings on the S@H site today and scratched your heads. TLC passed Art Bell today? (Not really.)

4) Team Ars Technica Lamb Chop 2661 1203397 1696.95 years 12 hr 21 min 10.0 sec
5) Team Art Bell 12700 1203335 3476.01 years 25 hr 18 min 16.5 sec

The reason? Repeat the mantra: backloooooooog…backloooooooog…backloooooooog… It has been at least 3 or 4 months since the Team Pages on the S@H site have had accurate numbers for the teams. The normal daily load on the data servers slows things down. Add on top of that the page requests for individual, team, country, you name it stats, and the stats servers are bogged down too. Then realize that the data and stats servers have to talk to each other, and maybe you have an idea of what is going on over there. The recent addition of the “last 10 results” page on the individual stats created even more havoc in an already overloaded situation. If you want to see how slow things are over there, and the type of problems they have, click on this link for the Team Art Bell stats. Set up the sundial and time how long it takes for the CGI scripting to finish. (I have to wait at least this long…even longer to pull the data for the daily stats from the pages.)

Because of all this server stress, not all work units sent to the servers are posted to the team totals every day. For example, with yesterday's additions to the team plus the regular work units posted for TLC, the team gained 16,343 WUs for the day. The “total” on the team page only increased 5,974 WUs over the same period. That left 10,369 WUs “lost” in the backlog. Extrapolate that over 3-4 months, and you can imagine how far behind the servers are. Because of this backlog, the numbers are quite a bit off from the totals on the site. Here is a table showing the totals the SETI site says each team has vs. the “actual” totals:
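
The backlog math for that one day is simple enough to write down, using the figures from the paragraph above:

```python
def lost_to_backlog(actual_gain, posted_gain):
    """WUs credited to members but not yet reflected in the team-page total.

    actual_gain: WUs the members actually earned that day
    posted_gain: increase shown on the CGI team page over the same period
    """
    return actual_gain - posted_gain

# Numbers from the August 9 update:
print(lost_to_backlog(16_343, 5_974))  # 10369 WUs adrift for that one day
```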

Team        SETI WUs   Actual WUs   Difference
SGI          2292216      2156098      -136118
Sun          2032439      2040524         8085
Compaq       1903387      1922919        19532
Art Bell     1204948     1220593*        15645
Ars          1206555      1193623       -12932
Intel        1048641      1055387         6746
Microsoft     990756      1009405        18649
MacAddict     958570       989105        30535
TKWSN         732184       773051        40867
CCI           647020       646755         -265

Those are some sizable differences. The “actual” work units are a total of each individual member's work units for members on the team at the time the stats were pulled. There is an * on the Art Bell number because that is only the total for the top 9,999 members of the team, which is all the CGI team page shows. I estimate that the number is actually about ~6,000 higher than shown, since member 9,999 has 3 work units and there are over 3,000 members who don't appear on that page.
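
The “Difference” column above is just (member-summed total) minus (site-reported total); a few rows from the table, recomputed:

```python
# Site-reported vs member-summed ("actual") totals, taken from the table above.
teams = {
    "SGI": (2_292_216, 2_156_098),   # (site total, sum of member WUs)
    "Sun": (2_032_439, 2_040_524),
    "Ars": (1_206_555, 1_193_623),
}

for name, (site, actual) in teams.items():
    # Positive means the site is behind reality; negative means members left
    # (their WUs are still in the site total but no longer in the member sum).
    print(f"{name:4} {actual - site:+d}")
# SGI  -136118
# Sun  +8085
# Ars  -12932
```

A negative difference, like Ars's, matches the note elsewhere on this page that the site still hadn't processed the work units lost when members left the team.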

Passed Art Bell……..not yet! Give it a week or two :) –zAmboni (da stats p1mp)


August 8, 2000

SETI@Home Gets a Deadline Extension
The original plan for the SETI@Home project was to last a total of only two years. But with today's news, that deadline has been extended for at least a couple more years. Word comes in today from both the San Jose Mercury News and SpaceRef.com that the SETI@Home project has formed a partnership with Project Voyager, a media start-up company. In addition to this partnership, The Planetary Society will assume lead sponsorship of the project. Currently the SETI@Home project hums along on an annual budget of around $400,000. The amounts of the new donations were undisclosed, but are said to be along the lines of “several million.”

What does this mean for the SETI@Home project? In addition to the increased fundage, Project Voyager will help promote the project through various media outlets. The project will also extend past its original end in 2001, and it will expand. They plan on using an Australian telescope to scan the Southern Hemisphere, and also plan on doing deeper analysis of the data. This is good news, since with the current number of members in the project there are too few work units to go around.

If people are worried about the commercialization of the project, those fears can be alleviated. Project director David Anderson chimed in, stating that the screensavers (clients) will not contain advertising, nor will volunteers be subject to direct marketing from outside parties.

Happy Day! Does this mean I will have to do the stats for a couple more years? 🙂

Gettin’ Closer Every Day
The gap between Team Lamb Chop and Art Bell is closing on a daily basis. Today we had an influx of new members. Several of the new members dropped into TLC's arms from KWSN. Leading the pack was Virus, who joined the team with over 7,000 work units. Other new joiners today include Mark, CMH, GarandMan, Nikeboy, Shawn, and 1 Good Cop. Welcome aboard, all!

Right now we are a mere 29,000 work units behind Art Bell! At our current pace, we shall pass Art Bell around the end of next week. Kickin'! The funny thing is that because of the backlog on the S@H site, they report that TLC is only 400 work units behind AB. Because of that same backlog, though, they still haven't processed the work units lost when guru and gang left the team at the beginning of July. The numbers I have for the site here reflect the current status of the teams.

Other Team Stuff
I have been negligent in my duties to let you know what is going on in the daily team realm. Today I will give ya some info! Memnoch has been cruising lately; today he moved up one spot, passing Gonzo for team spot #6. He may have trouble trying to get past svdsinner at #5, though. svdsinner is cranking out work units at a 220-per-day clip. If that isn't a good enuff per-day average, Admins spend CPUs has been cruising in the stratosphere with a 345 WU/day average…more than svdsinner and Memnoch combined!

VulTure's Den (#12) moved up one spot in the past week. Idle SouthPark Admin Crew fell prey to the VulTure, slipping down to unlucky #13. Knight has been on the move this week too; with an average bordering on 200 per day, he has moved up 3 spots to #18. Greens in Oz has been putting out consistent output recently and moved up two spots to #23. Virus' arrival made most of the team slip down a notch today…but several members in the top 50 still moved up this week. This includes Angus, who moved up 4 spots to #33, Beyond, who moved up 3 spots to #37, and finally Del B., who climbed up one spot to #45. –zAmboni


August 7, 2000

Housekeeping Day!
Been doing a bunch of cleanup on the site today. Changed up some of the navbars, here on the front end and on the graphs and charts pages. Hopefully they aren't too confusing. Also updated the weekly stats for the past week. Looks like the team did pretty good in caching up work units for the server downtimes. Many people didn't miss a beat! There really isn't any news today. Hope this will tide ya over! –zAmboni


August 6, 2000

Beta Update Update
I was doing a little thinking a bit ago, and came up with some theories on what is going on with the beta and the new science. The reason for this thinking? (Yea, I don't think that often! :-) It has to do with the benchmarks that I did on my system, and some inconsistencies with “normal” work unit times. Here is a table with the evidence (NOTE: these are based on the v2.70 beta):

System                Memory Settings   Memory Benchmarks   Bench WU time   “Normal” WU time
PIII @ 923 (142FSB)   3-2-3             409/464             3:21            3:52
PIII @ 866 (133FSB)   2-2-2             422/478             3:24            4:13

On the bench work unit the times were very close, probably insignificantly different. The times to look at are the “normal” WU times. If the memory bandwidth is a tad higher at the lower CPU speed, why are the normal work unit times longer on average? 21 minutes longer, in fact! The thing to think about is the difference between the benchmark work unit and normal work units. The benchmark work unit is usually faster because it does not contain any gaussians (the normal work units all contained gaussians), so it doesn't do as many gaussian searches. With the version 2.70 beta, even though they made it more cache/memory-bandwidth friendly, there are also CPU-dependent stretches in processing the work unit. To break things down, it looks like the FFT calculations are more memory-bandwidth dependent, and the searching for gaussians/pulses/triplets depends on raw CPU speed. The 21-minute difference in normal WU times can, I believe, be directly attributed to the higher CPU speed running at 923MHz. With the version 2.71 beta doing even more searches than the 2.70 beta, that difference in normal WU times should be larger. Follow? OK, quiz tomorrow!
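
The gap described above falls straight out of the H:MM times in the table; a small helper makes the comparison explicit:

```python
def minutes(hhmm):
    """Convert an 'H:MM' work-unit run time into total minutes."""
    h, m = hhmm.split(":")
    return int(h) * 60 + int(m)

# Times from the v2.70 beta table above:
bench_gap = minutes("3:24") - minutes("3:21")    # 3 minutes -- in the noise
normal_gap = minutes("4:13") - minutes("3:52")   # 21 minutes -- the CPU-bound part
print(bench_gap, normal_gap)  # 3 21
```

The 3-minute bench gap tracks the small memory-bandwidth difference, while the 21-minute normal-WU gap tracks the 923 vs. 866 MHz clock difference, consistent with the FFT-is-memory-bound / searches-are-CPU-bound split argued above.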

Beta Update
There don't appear to be any glaring problems with the current v2.71 beta that I have uncovered. Run times do seem a bit longer on average, because of the added searches being done in this beta. Trying to predict run times with SetiSpy is now a bit hairy, though. The added searches (and FFTs) are now at the beginning and the end of the work units. When a work unit is started, the estimated time of completion is a bit high; moving into the middle of the work unit, the estimated time goes down significantly and the times approach those of the 2.70 beta. At the end of the work unit there are the added searches, and the times go up again. These added searches seem to be done in the final 10% of the work unit.

At first the run times seemed a bit more variable, but they now appear to again fall into a “bimodal” distribution. The thing with this bimodal distribution is that the run times are the opposite of the 2.70 beta's. As a refresher, on the 2.70 beta client there were added searches being done on work units with angle_ranges < 0.1. Those work units were significantly longer than those with angle_ranges > 0.1. Now with the 2.71 beta, additional searches (and FFTs) are being done on all work units…well, *almost* all work units. The following is a plot of run times vs. the angle ranges of several work units done with the 2.71 beta:

Why the shorter times for the two work units? Those two work units did not contain gaussians. Eric K. stated a week or so ago that if the telescope scans the sky too fast or too slow (i.e., high or low angle ranges), gaussian searches are not done (check the July 22nd news). It is unclear whether pulse or triplet searches are being done at this time. Because these work units skip the gaussian searches, they are significantly faster than the others.
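
Eric K.'s rule can be sketched as a simple predicate. The cutoff values here are placeholders I made up for illustration; the project never published the actual thresholds, only that “too fast or too slow” scans skip the gaussian search:

```python
def gaussian_search_enabled(angle_range, low=0.01, high=10.0):
    """Whether a work unit gets a gaussian search.

    Per Eric K.'s comment, scans that move across the sky too fast
    (high angle_range) or too slow (low angle_range) skip the gaussian
    search.  The low/high cutoffs are placeholders, NOT the project's
    real thresholds.
    """
    return low <= angle_range <= high

print(gaussian_search_enabled(0.4))    # True  -> full-length work unit
print(gaussian_search_enabled(25.0))   # False -> noticeably faster work unit
```

This would explain the two short outliers in the plot: their angle ranges fall outside the window, so the (expensive) gaussian search never runs.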

Server Outages This Week!
Ripped directly from the technical news page at the SETI home offices comes this advance warning:

August 4, 2000
Individual user stat lookups have been turned back on. The compilation of group stats will hopefully be turned on soon.

There will be 2 planned outages next week. Both of these are UC Berkeley network outages unrelated to SETI@home. We will however be taking advantage of these outages for some systems maintenance. The first will begin 8/7/00 at 13:30 UT and should not last more than 30 minutes. The second will be on 8/9/00, beginning at 13:00 UT and should not last more than 2 hours.

Be prepared again…as usual, sometimes those server outages take quite a bit longer than expected!

One note on the database problems they had before: the backlog they had earlier still seems to exist. Results are still lagging in being posted to the team totals. Granted, it has only been a day or two since they got the stats back up…but their numbers and the actual team numbers are still a bit different. –zAmboni


August 5, 2000

Beta v 2.71 is Out
For those of you who have been testing the beta, check your email (if you haven't already). For those of you who haven't been testing, here is some info on the new beta, what it fixes, and what to expect out of it. I will post what was in the email along with my comments on the performance of the new beta, which I am currently running.

The SETI@home beta client version 2.71 fixes the major bugs that the beta testers have reported. These include the negative values sometimes displayed for pulse and gaussian scores, repeated client restarts, and a variety of graphics display problems.

NOTE ON FFT GRAPHS:
To take better advantage of the science code's speed, the client now continues calculating additional FFTs and searching for Gaussians, Pulses and Triplets while it finishes drawing an earlier FFT. This means we now draw only those FFT graphs for which there is time without pausing the science code.

As a result, the FFT graph will not necessarily represent the Doppler shift rate and resolution indicated in the “Data Analysis” display panel; the information in that panel always indicates what the scientific code is doing at the time, not necessarily what is shown in the FFT graph panel (the bottom section of the display).

I personally didn't see any of the problems with the negative values or client restarts, and only saw graphics anomalies once…but that didn't really bother me. The bottom graph now runs a bit weird, but I think it is expected. The previous clients showed a kind of choppy movement down the time axis; this one is different. When it wants to show a new block of data it draws a small slice, and that slice slides down the time axis a bit. It is hard to explain…but maybe you can visualize it. If not, then have a few beers and think about it again :). The processing seemed to go smoothly, though, and my CPU, I believe, is fast enuff for the graphics to keep up with the processing. (But hey, I keep it minimized so it doesn't really matter!)

NOTE ON SPEED:
The new beta version is not as fast as the previous 2.70 version. This is due to an extension we made to the chirp rates analyzed by the client. Now, the client looks for signals at chirp rates ranging from -50 Hz/sec up to +50 Hz/sec. Signals with a chirp rate in the added range (-50 Hz/sec to -10 Hz/sec and +10 Hz/sec to +50 Hz/sec) would probably originate from a source accelerating at a high speed, such as an extraterrestrial satellite or moon. This analysis for satellite or moon based signals opens up a whole new area of searching that has never been performed before.
Overall, the client should run roughly 50% slower than it did in version 2.04.

More science = good. If you have a faster FFT, might as well take advantage of it, right? Why did I highlight that last sentence? Well, about the performance…this is the same statement they made with the original beta, and that was roughly what they were shooting for performance-wise with the new client. I am running the client right now on a regular work unit and it isn't running that much slower than the previous (v2.70) beta. It is currently expected to finish in about 4:20, which is within the range of the last several work units I finished with the v2.70 beta. It is tough to compare times since I messed around with my system settings last night (more about that below), and I can't compare times with previous results. I personally think that the times will not be 50% slower than version 2.04, but YMMV.

Assuming there are no significant bugs found in this version, this will probably become the final version 3.0 that will be released.

Good news…let's hope the official version 3.0 release is soon. –zAmboni

More System Tweaking
If you saw yesterday's news, I reported on ColinT enabling interleaving on the VIA chipsets. Today we will look at some more memory tweaking, this time on the BX chipset. Last night I downloaded both WPCREDIT and WPCRSET from master tweaker H. Oda's site. What are these two utilities? WPCREDIT is a utility that edits the PCI configuration registers. Many of the registers can be configured in your system BIOS, but some of them can't, and those can be accessed with an editing program like WPCREDIT. The problem with WPCREDIT is that it doesn't actually save the settings you change, and a reboot will bring back the old settings. WPCRSET is a utility that loads on startup and applies the specific register settings you want.

I won't go into how to use the two utilities, but I will pass on what I changed and how it affected performance. I did a search on the above two programs to see what other people have done with them (actually, I wanted to find out HOW to use the darned thing!), and came up with this page on how to maximize the performance of the BX chipset. I specifically wanted to find out how to increase memory bandwidth, and it had several tweaks to help. Several of the tweaks I had already done in my BIOS (SoftMenu III on an Abit BE6-II), such as the vaunted 2-2-2 RAM timing. The five tweaks I tried were the following:

  1. Leadoff Command Timing (offset 76) – 3 CS#
  2. DRAM Leadoff Timing (offset 77) – bits 0 and 1 set to 00
  3. Host Bus Fast Data Ready (offset 52) – enable
  4. DRAM Idle Timer (offset 78) – 128 clks or infinite
  5. DRAM Refresh Rate (offset 57) – 249.6us
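
Each tweak above is just a bit-level edit to a one-byte PCI configuration register. WPCREDIT does the actual hardware read/write; the bit math itself, though, is simple enough to sketch. The starting value below is made up for illustration:

```python
def clear_bits(value, *bits):
    """Clear the given bit positions in a config-register byte."""
    for b in bits:
        value &= ~(1 << b)
    return value & 0xFF

def set_bits(value, *bits):
    """Set the given bit positions in a config-register byte."""
    for b in bits:
        value |= 1 << b
    return value & 0xFF

# Tweak #2: DRAM Leadoff Timing (offset 0x77), bits 0 and 1 -> 00.
# 0xAB is a hypothetical starting register value, not a real reading.
print(hex(clear_bits(0xAB, 0, 1)))  # 0xa8
```

The same pattern covers the other tweaks: read the byte at the register offset, flip the documented bits, and write it back (which is what WPCRSET re-applies on every boot).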

My system's base settings were the following: PIII 650E @ 866MHz (133FSB), memory timing 2-2-2, Precharge Control enabled. With these settings I had SiSoft Sandra 2000 memory benchmark scores of 422 CPU / 478 FPU. The two settings you would expect the greatest bandwidth increase from are the leadoff timing and command timing tweaks…from the others I expected only minor changes. Here are the results of the tweaks:

Tweak                Changed to                               CPU Bench (MB/s)   FPU Bench (MB/s)
Default                                                       422                478
#1                   3 clocks                                 426                479
#2                   no delay                                 432                501
#1 + #2              combo of tweaks #1 and #2                system rebooted
#2 + #3 + #4 + #5    idle timer = infinite, refresh 249.6us   433                503

The combo of tweaks definitely improved my memory bandwidth; the biggest change came from setting the DRAM Leadoff Timing to “no delay”. The others were minor. I had a conflict/problem when I tried to change the settings for both Leadoff Command Timing and DRAM Leadoff Timing. I am not sure what the difference between the two of them is, though. It worked OK changing them individually, but when I tried to combine both tweaks, the system immediately rebooted. Not a shutdown or lockup, mind you: it rebooted the moment I hit the “set” button. I have yet to run benchmarks to see if the tweaks result in improved SETI client performance, but give me a couple of days and I'll get that info to ya :). Unfortunately, the BX chipset doesn't have a cool “interleave” feature, but it looks like there are some tweaks that *can* improve your system performance. –zAmboni


August 4, 2000

Lions and Tigers and Stats OH MY!
The stats are coming! The stats are coming! Yes, after a several-day delay they are back online. When I get through with this diatribe I will upload the goods. They are done; I just need to upload them :-). Put down that needle! Here comes your fix! –zAmboni

Yes boys and girls the stats are running!
We have yet to hear the official announcement from Berkeley, but we can confirm that the stats are running. Has anyone seen zAmboni? I am sure he is busy charting out the latest stats as I write this. As soon as more news is known we will be sure to post it A.S.A.P. But hey, what else is there to know? The stats are up! Sounds like a reason to celebrate. I would like to take this moment to thank everyone on the team for all their hard work and addiction…I mean dedication. This team is definitely going to be the one to beat in the upcoming months. For those of you who don't know me, I am knight, friend of Rat and many more. I am here to help out whenever possible. Hopefully I won't screw anything up. Luckily we don't have any Informix databases for me to play with. Well, hope to hear from you guys on the thread. –knight


August 3, 2000

Frustration Starts to Set in…
As much as most of you enjoy taking a look at the team and member stats, I actually do enjoy doing them. The origins of this site are in the statistical information provided by the SETI@Home website. Even the benchmarking pages were started in an effort to help improve result output, and therefore member and team stats. I have known for a while that their statistics servers have been swamped. This has caused a backlog of results to be posted on the stats site. My normal way of doing the stats (team-wise) didn't really reflect the actual standings. Luckily I was able to get and manipulate the data in a different way, which gave a more accurate standing of the teams and the members. Unfortunately, with the stats on the SETI@Home site being down, I “cannot do my job.”

Sometime this afternoon they updated the technical news page on the site stating the following:

Our databases are under major stress at this point and cannot handle the extra traffic of all the CGI programs. Under normal conditions, we receive as much as 5-10 user stats requests a second, 24 hours a day. If we turn the user stats program back on, our science database will not be able to keep up with the data server, and we could potentially lose science results.

As well, we are not running the programs that routinely update the stats pages (like domain/team/country totals, for example).

We understand that the statistics are a major part of the interest in SETI@home, and are currently doing our best to get the database back in working order.

To be honest, I think it may be a while before they get things back up and going. The root of the problem appears to lie with the choice of database when they started the project, and the inherent limitations of that database. To give a bad analogy: if you build your house on a foundation of sand, with enuff weight on top of it, the house will eventually sink. The S@H guys picked Informix for their database when starting the project; it may not have been the best choice, but it was free. Of course, when they were planning the S@H project, they didn't expect the response the project has had. Even with the Informix limitations, they probably thought it would serve them well for the number of users they expected.

The statistics that the SETI guys provide have been one of the reasons why the project has been such a success, and they have also been one of its big problems. The more varied the statistics they provide, the more time it takes to produce them, and the lower the overall performance. Originally the statistics only covered the top 100 members on a team, the top 100 teams in each group, and limited individual member stats. That top 100 jumped to the top 200, and before the downtime, to the top 9,999 members/teams in each category. Recently they even added viewing of the last 10 results a member turned in. Right now we don't have any of these.

How can they fix this? Well, I guess the most optimal solution would be better hardware and a better database to work with. Unless some hardware manufacturers chip in with donations the former won't happen, and at this point in the game switching to a different database would be out of the question. The way it sounds, the team is working on trying to “patch” things up and work around the limitations in the database and their server setups. I don't know about you, but I keep getting this image of the crew trying to bail water out of the Titanic.

I am sure that they will get the stats back up…but in what form? I have a feeling the stats will be limited in their scope. How limited? Guess we all will just have to wait and find out.

Tweaking Your System
With the current 2.04 version of the S@H client, if your CPU doesn't have a huge L2 cache, memory bandwidth may be a limiting factor in client performance. Reports from the beta client suggest that this dependency on memory bandwidth may have decreased, but it is still there (and I have the results to prove it! :-). ColinT has reported that there is a utility out there that will help those with VIA chipsets improve their memory bandwidth. He found this utility on the Asus forums; it allows you to set the memory to a 4-way interleave, which can improve your memory benchmarks (and hopefully your S@H performance). You can see his post about this on the SETI/RC5 forum.

Also in the thread, Exchequer points out yet another handy tweak from H. Oda, called WPCREDIT, with which you can edit memory settings on just about any chipset (among other things). The program is a bit confusing to use, but it may help out those without VIA motherboards. You can check it out here. I have downloaded it and will be trying it out. I will let ya know what I find. –zAmboni


August 2, 2000

Still Waiting…
The CGI pages are still offline over at the SETI home offices. There does seem to be a bit of movement there, though…instead of the previous “web page missing” note when trying to load the pages, I am getting an “access forbidden” page. They may be working on things, and hopefully everything will be back up soon…I have some stats I need to process for y'all! –zAmboni


August 1, 2000

A Tip Top Twofer!
To help tide you over during your stats withdrawal, we have posted a Tips Twofer! First up is Geordie, who gives us an overview of the different caching options available for your S@H client. Second up, Poof weighs in with a SetiDriver tutorial. It may be a tad late to queue up for the previous outage, but these tips will help you prepare for the next downtime.

Microsoft Windows ET
Even though Windows Me hasn't even hit the shelves, there are already plans to expand. The DOJ may have put the clamps on monopolizing the mere mortals here on Earth, but do they have jurisdiction “out there”? Microsoft co-founder Paul Allen and former M$ CTO Nathan Myhrvold are donating $12.5 million to help finance an array of small radio-telescope dishes to be used in the search for ET. Currently the different SETI searches spend a heaping load of money buying time on various telescopes and hardware to aid in their search. This new array will give them hardware of their own to use. Don't expect to see any direct benefit from this influx of cash on the current SETI@Home project, though. It looks like this money is earmarked specifically for building the radio-telescope array, and will not be funneled into other ongoing projects. The timetable? Look for the radio-telescope hardware to be in place by 2003 and fully operational in 2005. You can check out stories on the new array at Wired and at space.com.

Servers ‘R’ Up (I think)
Here is the news straight from the horse's mouth:

We successfully migrated the science database to the new devices. However, we have run into a pre-existing informix limitation on the number of “extents” that we can allocated to the spike table. We have created a secondary spike table so that results can be accepted, but this breaks our query code. Thus, CGI’s will remain off until we can find a work around, but the data server should be up. Please note that there may be scattered restarts of the server over the next couple of days.

A quite unrelated problem is that the Berkeley campus backbone is experiencing Internet connectivity problems which may make it appear that the data server is down.

It's good that they got the science database all fixed up…the Informix problem is another thing. What does this mean for you and me and everyone else right now? Because the CGIs are off, 1) you cannot view your account information and personal stats, and 2) I cannot do the stats update until they get it fixed. Bummer. Right now there is almost no point in trying to rush and dump your queues unless you are running short of work units. No use trying to post results when I can't do the stats :/. I wish I had some more cool news to pass along, but there isn't any… –zAmboni

