Archives

February 2001


LAMB CHOP HOME

JOIN TEAM LAMB CHOP

TEAM STATISTICS

MEMBER CHARTS

MEMBER PROJECTED

MEMBER GRAPHS

ACTIVE MEMBERS

MEMBER OTHER

OVERALL CHARTS

OVERALL GRAPHS

CHART LEGEND

SETI BENCHMARKING

SETI TIPS, FAQs, et al.

ARCHIVES

PUBLIC QUEUES

ARS SUPERCOMPUTER

SETI@HOME PAGE

ARS DISTRIBUTED FORUM

TEAM BEEF ROAST

TEAM CHILI PEPPER

TEAM CRAB CAKE

TEAM EGG ROLL

TEAM FROZEN YOGURT

TEAM PRIMORDIAL SOUP

TEAM PRIME RIB

TEAM STIR FRY

TEAM VODKA MARTINI

THE SUSHI BAR

ARS TECHNICA

LINKAGE


PERSONAL STATS LOOKUP:
SETI@Home ACCOUNT:

COMMENTS? EMAIL: WEBMASTER
(remove NO.SPAM)

Mad props go out to IronBits for hosting the site, and to all the others who have kept this site going for the past couple of years!

March 1, 2001

Bah! Date Pushed Back.
OK, this is the latest news… it looks like things are back to Friday… LATE Friday before things will be back up:

This morning we (CNS) were told that the repairs would be finished sometime today. Unfortunately that’s not the case. The carrier has finished pulling new cable, but now LBNL has to splice the new cable into their own fiber plant. This process will not be completed until late in the day Friday, 2 March.

Oh well..

Date Pushed Up.
The message now on the S@H site shows that the expected date for the fixes has been pushed up to today. The message now reads:

Contractors are pulling new cable. We now expect that service to SSL will be restored sometime today, Thursday, 1 March 2001.

The servers aren’t up yet (5:00pm EST)… and of course we’re not sure when they will finally be up. I still have a couple of hours till I am hurting… I have a small handful of work units left in my SetiQ stash :). Let’s hope it will be soon. Then the mad rush to download work units will begin!


February 28, 2001

Servers Down Until Friday???
It looks like there was a message on the Berkeley communications and networks site, which now appears when you try to access the main S@H site. Here it is:

Fiber cut silences SETI@Home

At about 3:30 AM PST on 27 February an optical fiber cable connecting the U.C. Berkeley campus with the Lawrence Berkeley National Laboratory was cut, apparently by vandals trying to “salvage” copper from other nearby cables.

The broken fiber carries data and voice connections for LBNL and also for the Space Sciences Lab. SSL is where the SETI@Home project is located, so the millions of participants helping to analyze data have been unable to contact the SETI@Home servers for more than a day.

Contractors are pulling new cable now. It’s expected that service to SSL will be restored by Friday, 2 March 2001. We’ll update this page as we learn more about the progress of the repairs.

Ug. Friday, eh? Of course, when I installed the new SetiQueue beta, I decided to lower the number of work units I cache, figuring that the S@H servers are only ever down for several hours at most. I only had a 3 day stock of work units in my queue… I am going to run out of work units sometime tomorrow night. Oh well. It has been a bit over a year since a day has gone by that I have not crunched a work unit on my machines here, and that streak may come to an end.

I guess Michael below was right to say “DO NOT QUOTE!!!”. It wasn’t a construction crew that caused the damage… more like a “DESTRUCTION” crew. It is kind of sad that the entire project would have to be shut down because of this.


February 27, 2001

How to Bring Down a Project
All you need is 1) a construction worker, and 2) a backhoe.

I am sure you are waiting for the servers to get back online so you can dump your caches and fill them back up. The Berkeley servers have been down for most of the day… but they really haven’t been down themselves. The servers are probably up and running fine, but they are kind of useless when no one can connect to them. So what happened? Well, I’m not too sure exactly what is going on, but here is the latest found on alt.sci.seti:

Well, we seem to have isolated the cause:

NOTE: THE FOLLOWING IS AS YET UNCONFIRMED!!!! DO NOT QUOTE!!!

At about 0330 PST (GMT -0800), a contractor (possibly working for either the City of Berkeley or the Lawrence Berkeley National Lab or maybe even the University) cut a fiber bundle that connects voice and data between the UC Berkeley campus and LBL and other buildings at the top of the hill, such as the Space Sciences Lab, home of seti@home. I wish it were something more exciting like bombs or rebellious office computers.

Based on past experiences, this will probably require a manual splice, which will likely take all day.

So sit back, relax, hang out at the Clinic or whatever, and give your computer a rest.

One more thing, if you’re one of those people who likes to complain about lack of redundancy between the SETI servers and the rest of the world, be prepared to offer free co-location at an enterprise grade facility and/or provide all funding for said service as part of your “suggestion.”

Remember: ET isn’t paying for this.

🙂

Michael Sinatra
Network Services
UC Berkeley

When will things be back up? Hrmmmmmm, it has been the good part of a day since the site went down. I don’t know. I have heard one day. But I’m not sure how fast those guys can get things fixed. Manual repair of a screwed up fiber optic bundle may take some time. I hope you all have a good cache of work units!

As you have gathered already… I can’t do the stats if I can’t access their website :/. So the next update of the stats will depend on when they get their act together over there.

BTW… WTF were they doing operating a backhoe at 3:30 am????


February 17, 2001

Bunch o Stuff Today…
I have filed away a lot of kind of interesting stuff newswise over the past couple of days… and I think I am going to dump it all on you now 🙂

What Would A Signal From ET Look Like?
Since we all are doing the searching… just what exactly are we looking for? Would we know from looking at our results file what would be a signal from ET and what is just junk? That is a question one person asked on the SETI newsgroups last week. We got an answer from David Woolley on the group. I guess it is a good enough explanation, since Eric Korpela asked him if he could post his response on their website :) (sorry that this may be a bit long).

OOPS… OK, it was WAY TOO LONG… so I posted the whole bit on the Ars Distributed Forums. Check it out there!

Inverted Gaussians?
There was also a question on the SETI newsgroups asking someone to explain what they called “inverted gaussians”. The “inverted gaussian” is what would be shown on a GUI screen like the one shown in this thread here (EDIT: Links to pics go bewm). Why would there be a valley in the data there? Is it ET? Nah, not quite. Eric Korpela gives an explanation of the phenomenon:

>I had a strange looking work unit a week or so ago. I happened to notice
>a “valley” running through it.

This actually happens in one out of every 256 work units. The unit you processed had a base frequency of 1420 MHz. Because this is the center frequency of our data analysis system (the DC bin in electronics speak), any imbalance in our electronics would show up as a signal at this frequency. In practice, no matter how well you tune things you’ll still get strong signals in this bin. To avoid having problems we filter out a region centered on this frequency. The valley you are seeing is the effect of the filter on the background noise.
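If you want to see the effect for yourself, here is a tiny Python sketch (my own illustration, not the actual S@H analysis code) of how a DC imbalance piles up in the center bin of an FFT, and how zeroing a region around that bin leaves a “valley” in the noise background:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated receiver output: flat background noise plus a small
# constant offset standing in for the electronics imbalance.
samples = rng.normal(size=1024) + 0.5

# Power spectrum: the offset piles up in bin 0 (the "DC bin"),
# towering over every noise bin.
spectrum = np.abs(np.fft.fft(samples)) ** 2
assert spectrum[0] > spectrum[1:].max()

# Filter: zero out a small region centered on the DC bin (the width
# here is made up).  What remains is a valley in otherwise flat noise,
# just like the "inverted gaussian" people were seeing.
half_width = 4
spectrum[:half_width] = 0.0       # bins 0..3
spectrum[-half_width + 1:] = 0.0  # mirrored negative-frequency bins
```

The valley is purely an artifact of the filter, which is why it shows up in exactly the work units whose base frequency lands on 1420 MHz.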

Something to Consider
A response on the newsgroups to the article on S@H cheating brought out this post from Phil Weldon.

Sounds like a quote from someone trying to protect his turf, a projected commercial distributed computing enterprise. Here’s another line from the article:

“As well as heading up Seti@home, Anderson is the CTO of United Devices, a
startup planning to turn a profit from distributed computing.”

If you go to the United Devices page http://www.uniteddevices.com/about/management.htm there’s Anderson.

I think I am going to beef up MY security to make sure United Devices does not slip in a few for-profit work units. In fact, the whole idea is so damaging to SETI@home, Anderson should either be severed or SETI@home will lose all credibility, not from the outside, but from within.

There has definitely been a lack of full disclosure, and in my opinion, Anderson shows the ethics of a round worm. His lack of honesty places him several rungs lower than “Ollie” and the recent fakers. It is also disappointing that the members of the SETI@home team who have put in so much work on the project and posted so often to this newsgroup have participated in the cover-up. UCB’s reputation is tarred with the same brush.

“It’s the economy, stupid, not the science.” So it was competition first, all along. We were just naive to believe that it was the little guys who were putting science second. I wonder how the sponsors feel about their contributions to a start-up company?

I do have to say that I have been quite wary of anything I hear from David Anderson. For someone who is supposed to be the head of the SETI@Home project, it seems that his mind is somewhere else. I do tend to believe people like Eric and Matt who deal with the project on a day to day basis. I have to take David’s comments with a grain of salt, since when he talks about the S@H project, he may have other future plans in mind instead. They have learned a lot of things while working on the S@H project, heck it is one of the first and it is the biggest distributed project, but sometimes you do have to wonder where the director’s mind is when he is trying to develop a for-profit distributed endeavor.

S@H Stats Glitches
For the past couple of weeks many of the top teams’ pages on the Berkeley site had not been updated, and even the daily totals pages on the site were not getting updated. Unfortunately, people like me assume that they are on top of things over there and are working on them… but I guess that is not necessarily what actually happens. Matt Lebofsky chimed in on some questions about the updating of the pages on their site a couple of days ago.

Sorry sorry sorry.. There’s this weirdness in the team stats script that I just fixed and am running the team stats program currently. In about 12-24 hours, y’all will have freshly updated team pages.

There’s no reason for these bugs to go unnoticed – I think it’s become such a fact of life that it can go weeks before there’s noticeable mention in the newsgroups.

I’m writing a new cronjob now that checks the last update time of the teams pages and warns me right away if things are getting way out of date. Maybe now I’ll act on this faster.

(The team stats page generator runs once a day under normal circumstances).

The pages in question seem to be updating normally now. YAY….

Also related to the stats pages on the Berkeley site… there was a question about the change in the definition of an “active” user on the site. Even after they changed the active user definition, there is STILL a downward trend in the number of active users. To me this points to a lot of people basically quitting the project altogether. I guess we will not know for sure… but with the 3.03 release being at the beginning of the month, I should wait for 28 days to pass before saying anything for sure. But I have a feeling that graph will not recover. I had also posted a message noting that some of the graphs that used to be on the Berkeley pages were gone, but it was pointed out that they still exist, even though they aren’t listed on the pages. If you are interested, you can check out the other graph pages here and here.

See The Shuttle
I have been quite wary of most types of “streaming media” that I have taken a look at. Most streaming media has been quite poor quality, and if it claimed to have high throughput/quality, the connections were never that good, and the actual throughput did not live up to what was claimed. Most of the streams I had tried used Real Media (which I hate), but finally I found something worthy enough to watch, PLUS it gives a good connection stream. (BTW – I have a pretty good cable modem connection here.) So what is this great thing? It is NASA TV. From the NASA site, they list various outlets for live streams of NASA TV, but most of them use Real Media streams. Then I ran across the Yahoo/Broadcast.com page for NASA TV. This page gives you several different stream choices to view NASA TV through Windows Media Player. I fired up the 300K stream and it is pretty darn good. No problems with network congestion, and I have been getting a constant 300K connection on my machine. The picture is pretty clear, even at double the default size. It even looks pretty good full screen on my 20″ monitor.

For the past couple of days I have looked in on the current Space Shuttle Mission, and there is something going on all the time, even if it is only a display of where the Shuttle is positioned above the Earth. You need to look quickly; the shuttle is planned to land sometime tomorrow morning. I believe that they are going to broadcast the landing live tomorrow, and I will be there to watch it :). If you are not partial to Windows Media Player, they have different options available on this page (I hope that link is right). Unfortunately it opens the stream in an HTML window with a bunch of other junk. I would rather get the stream address and enter it in a standalone media player.

Getting NEAR to Eros
The NEAR (Near Earth Asteroid Rendezvous) spacecraft had been circling the asteroid Eros since Feb. 14, 2000. The spacecraft was nearing the end of its lifetime, and funding for the program was running out. What to do with the spacecraft? Heck, let’s try to land the darn thing on the asteroid. That is just what they did on Feb 12th. They did not expect the craft to survive the landing, but it did. The craft was traveling about 5mph when it hit Eros. This landing has given the craft a new lease on life, although it may be a short one. The mission was planned to end this past Wednesday, but they will give it at least another 10 days. They are going to try to fire up the craft’s Gamma Ray Spectrometer, and also its magnetometer, in an effort to get more data on the asteroid. They are hoping to get some data back from the craft sometime tomorrow. You can check out space.com for some more info on this mission.


February 16, 2001

Wired Article on S@H Cheating
There was an article posted on Wired yesterday which discusses the recent episode of the hacked S@H clients, and other cheating in the project. Unfortunately, I think they exaggerated the episodes of cheating in the project and the resources used to deal with the project’s “security”. Probably the most quoted part of the article will be this one by David Anderson, director of the S@H project:

“Fifty percent of the project’s resources have been spent dealing with security problems”

That sounds quite bad, doesn’t it? Following the project for the past couple of years, I can remember only two episodes of people using modified/hacked clients. There have been a smattering of people who originally tried to fake results so they could get their names in the “top spike, or top gaussian” tables, but they quit putting names on those, so that has subsided.

Getting back to the above quote, the “Fifty percent of the project’s resources” part is a bit misleading. He makes it sound like they are spending hours and a lot of server power to combat malicious cheating in the project, but I don’t think that is what he is referring to. What I believe he is talking about is the way their servers handle the work units and results. To make sure they have valid results, they need to send out the same work unit to several different members, and receive at least 2-3 results for this same work unit for verification. The 50% he is really talking about is the cost of this redundancy. They have to keep work units around on their servers until they receive 2 or more results for each work unit, plus they have to keep multiple results per work unit on their servers. This could easily account for the 50% he quotes.
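To make that concrete, here is a little Python sketch of how such a majority-match check could work. The function name, thresholds, and result format are all my own guesses for illustration; Berkeley has deliberately not published their actual criteria.

```python
from collections import Counter

def validate_workunit(results, min_copies=3, threshold=2 / 3):
    """Accept a workunit if enough independently returned copies agree.

    `results` is a list of hashable result summaries, one per member
    who crunched the same workunit.  Returns the agreed result, or
    None if there are too few copies or no sufficient majority.
    """
    if len(results) < min_copies:
        # Keep the workunit (and its results) on disk; wait for more.
        return None
    value, count = Counter(results).most_common(1)[0]
    if count / len(results) >= threshold:
        return value  # 2-of-3 (or better) agreement; 3-of-3 not required
    return None  # no consensus; flag for closer inspection

# Two matching copies out of three is enough to validate.
assert validate_workunit(["spike", "spike", "garbage"]) == "spike"
# Three disagreeing copies fail the redundancy check.
assert validate_workunit(["a", "b", "c"]) is None
```

The storage cost is visible right in the signature: every workunit and every returned copy has to sit on the servers until `min_copies` results arrive, which is exactly the overhead the 50% figure plausibly refers to.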

Another reason why I don’t think that cheating is as rampant as the article describes is the responses from the S@H foot soldiers who frequent the SETI newsgroups (alt.sci.seti and sci.astro.seti). As I mentioned in the Feb 3 news earlier, Eric Korpela was answering some questions on cheating etc., and again here are a couple of his quotes:

“The cheating percentage is currently tiny. Out of the last 400000 results, I don’t see any evidence that anyone was using the patched client.”

“The fraction of results that fail (redundancy checks) is pretty tiny. The fraction that have false redundancy is even tinier.”

Hrmmmmmmm… does that sound like they are overly concerned about cheating?

Of course, if you read the article there are some quotes from a familiar name. Caesar has some comments about the recent hacking problem and gives a little insight on the people that were involved. Check it out for yourself.


February 14, 2001

Ug.
Man, has it been a week since I updated this page? Sorry. I am suffering from news writing block, I guess. That, plus slowly trying to redesign the way I am doing the stats. It isn’t a complete redesign, just an attempt to make things easier on my end. Unfortunately, getting in the mood to write some VBA code is a tough task :). Hopefully I can get it done REAL SOON LIKE!


February 7, 2001

In Case You Missed It…
The S@H guys weren’t able to complete all of their upgrades on Monday, so they did them Wednesday instead. Here is a synopsis from them on the matter:

February 7, 2001
We had a planned outage this morning for two hours to rearrange the server closet in order to make room for a new database system. We took this opportunity to replace one of the old RAID cards on the science database server with a new one, as well as clean up the terrible nest of power/ethernet/scsi/serial cables behind the server table.

Are You An “Active” User?
I am pretty sure most of you consider yourselves active users… but just how do you define an “active” user? The S@H guys had defined an active user as one who had returned a result in the past 2 weeks. Since the middle of January there had been a steady drop-off in the active users shown on this page. The drop-off, I believe, coincides with the time that they started the upgrade warnings for the pre-3.03 clients. What caused this decline? Was it people who stopped crunching? Or was it due to the longer crunch times for the version 3.03 clients that people upgraded to? I tend to believe the former… but it looks like the Berkeley guys changed the definition on us. I noticed a sharp spike in the active users graph a couple of days ago, and today there is an explanation for it:

Note: On February 6, 2001, we decided to change the definition of “active user” from a user who has returned at least one result in the past 14 days to a user who has returned at least one result in the past 28 days. This is because version 3.03, which became mandatory on February 1, 2001, takes significantly longer to process, and result times began falling beyond the 14-day window.

The explanation sounds plausible, but then when you think about it… one result in 28 days? Would you consider 12 work units completed in a year to be the output of an “active” user? But hey… I guess if you run the project you can create your own definitions. Well, what kind of difference did this make in the number of active users? On Jan 19th there were about 540,000 active users (old definition), and that number had dropped off to about 500,000 active users a couple of days ago. With the definition change, there are now 590,000 active users. They gained 90,000 active users based on a definition change alone.

I would actually like to see 4 graphs to gauge the participation of S@H members: separate graphs showing the # of users sending in results in the last 1 day, 7 days, 14 days and 28 days. I’m pretty sure that is a little too much to ask though :). One other thing I noticed with the pages over there at Berkeley… they used to have a graph showing the # of results processed per day and its rate of change, but those are now gone. I don’t know where they disappeared to, or why though…
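Those sliding-window counts are simple enough to sketch in Python. The users and dates below are made up purely for illustration; the point is that widening the window from 14 to 28 days “gains” active users without anyone returning a single extra result:

```python
from datetime import date, timedelta

def active_users(last_result_dates, today, window_days):
    """Count users whose most recent returned result falls within the
    last `window_days` days -- the sliding-window definition the
    project uses (14 days before Feb 6, 2001; 28 days after)."""
    cutoff = today - timedelta(days=window_days)
    return sum(1 for d in last_result_dates.values() if d >= cutoff)

# Toy data: each user's last returned-result date (illustrative only).
last_seen = {
    "alice": date(2001, 2, 6),
    "bob": date(2001, 1, 25),
    "carol": date(2001, 1, 12),
}
today = date(2001, 2, 7)

assert active_users(last_seen, today, 14) == 2  # old definition
assert active_users(last_seen, today, 28) == 3  # new definition
```

Running the same function with 1, 7, 14, and 28 day windows over the real per-user data is all those 4 graphs would take, which is why it seems like a reasonable thing to wish for.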

Friendly Reminder From Matt
There were a couple of posts on alt.sci.seti saying that the S@H website was down. But the site has been up the whole time. Why did the poster think the site was down? He was trying to reach the S@H site using the url www.setiathome.com, but the problem is that site has nothing to do with the S@H project. I believe that the page redirected to the correct url for S@H (http://setiathome.berkeley.edu). Here is what Matt had to say:

Regarding the complaints that the www.setiathome.com site is down, let me just point out:

www.setiathome.com (or .org, .net, or .ws for that matter) has absolutely nothing to do with our project. The only web site you should be referring to is: http://setiathome.berkeley.edu (or http://setiathome.ssl.berkeley.edu, which points to the same web server).

Any articles that advertise www.setiathome.com as our official web site are just plain wrong.

Woah
I know this is a little off topic here, but hey… We have had our share of snow up here in Michigan this winter, and there are still a couple of inches of the white stuff on the ground. Imagine my surprise when, just a second ago, there were flashes of lightning and claps of thunder, and now it is starting to pour down rain! Woah…

Just for Laughs…
First it was Stiledot, then came AOLTechnica. If you don’t have BBSpot bookmarked, then there is something wrong with you :). Brian “Don’t Call me Joe Bob” Briggs has some of the funniest tech humor that I have seen on the web. It doesn’t hurt that he is a fellow Ann Arborite also :). You should check out his latest creation, the Slashdot Random Story Generator. For example, here is one of the randomly generated stories:

Pope John Paul II Discusses PalmOS
Posted by brian on Thu February 08, 10:05 PM
from the have fun hitting reload page dept.
smerfherder writes “Salon has an interview with Pope John Paul II which previews the new version of PalmOS and its place in the current technological environment. I was surprised to learn that Pope John Paul II had an early hand in the development of PalmOS. Now if only PalmOS could help me with getting babes.” Good read.

While you are there, check out the rest of the site. There are loads of laughs in other stories, and if you look closely www.teamlambchop.com is linked up on the pages there also 🙂

Here is another link that is good for a laugh. Now… just imagine Stephen Hawking with a kick ass rail shot. What? You can’t? Maybe you will think otherwise after hearing this .mp3. The .mp3 is called “Quake Master”, and in it Hawking is a Quake God. It’s good for a laugh, and it is a pretty decent tune also! Check it out! (Thank Steve over at [H]ardOCP for the link.) WARNING: I forgot to mention this when I originally posted it… this song has some strong language, and may not be appropriate for some.


February 3, 2001

Servers Down On Monday!
This is from the front page of the S@H Site:

Server Outage on Monday, February 5th: Starting at 10:00 PST (18:00 GMT), our server will be down for two hours in order to upgrade the science database hardware config and move a new system into the data closet.

Be Prepared!

Eric Korpela Chats It Up
There has been a pretty good discussion on the alt.sci.seti newsgroup over the past several days surrounding the fallout from the hacked client and various other things. If you want an in-depth look, I suggest that you check out the “The patch (semi-official response)” thread for more. I will outline several points from the discussion here though.

On the hacked clients:

The cheating percentage is currently tiny. Out of the last 400000 results, I don’t see any evidence that anyone was using the patched client.

On data verification:

>Why have you failed to do basic validation so far? How many years now?

We are doing such validation. However we lack the disk space to hold the workunits around to wait for results. In general we get three or more results per workunit. If 2/3rds or greater match, it’s considered valid. We don’t require a 3 of 3 match.
What we don’t do is update the user database to match the results of the redundancy checker. The redundancy checker doesn’t run in real time, but is run on a tape by tape basis after we expect that all of the results for a given tape have been returned. It’s also running quite a bit behind the times, which is why we’re in the process of setting up a separate database machine to hold the verified results.

Frankly, it’s not that important for the science to have the user database accurately reflect the results of the redundancy check. The fraction of results that fail (for reasons other than version differences) is pretty tiny. The fraction that have false redundancy is even tinier.

I guess this is something that isn’t really known by many. They do try to verify the results, but if a work unit is rejected by the verification process, there is no impact on the user side of things. This may be difficult to change with their server setup right now. Let’s say one person is sending hacked results. The verification may kick out those results by checking them against other results returned for the same work unit, but there currently isn’t any way to tie that back to the user who sent the bad result.

>Another problem with your current validation methods, whatever they are, is
>that you feel the need to keep them secret. If they are good they will
>withstand public scrutiny.

The problem is not that they wouldn’t withstand public scrutiny. The problem is detailing the thresholds we use to detect cheating would provide information on how to cheat without being detected. The precise criteria we use to decide if a null result is preferred over a non-null result for the same workunit would do the same thing.

Of course, telling people what criteria you use to check results may actually help those who want to subvert those checking procedures.

>IMO, Any security system that depends on
>secrets is not as sound as one that has been subjected to open review.

On the contrary, any security system that provides directions on how to be undetectably bypassed during an open review process is worse than one that depends upon secrets. There’s no way to have trusted communication with someone you don’t trust. The best you can do is determine who you trust and who you don’t, and not let anyone know to which group they belong.

Finally, here are a couple of interesting things…

>This would never have been an issue in the
>first place, if Berkeley had put some simple controls into the server side
>of the operation. Fix that and the possibility of that happening again is
>vanishingly small.

With enough server power and manpower we can implement a fix. The question is a resource allocation question.
Right now the drive here is to get more results on the web site ASAP. I don’t control the direction of the project. Right now I spend my time seeing that the post processing is getting done correctly and that in our rush to get the results out we don’t sacrifice validity for expedience. But this is not one of those cases.

I think that more results will be good. So far very little has been published on the results of the work that has been going on. It has been the good part of two years into the project, and there has been little in terms of results. A few newsletters really don’t cut it. Let’s hope they get to it soon!

Matt Lebofsky Speaks Out Also
Many people have noticed that the group stats seemed to be down for a week or so. The pages were basically blank, or didn’t get updated.

Yup. I fixed them. Not much to it this time – the wrapper script around the group stats program stopped working as expected. Now it’s fixed.

Sorry about that – These bugs take a long time to sort out since the program takes up to 24 hours to run, and I was out of town for five days last weekend.

Unfortunately it seems the group statistics are still only updated once a week. I wish they did more than that, but I guess we have to take what we can get.

I would like to chime in that the “view last 10 workunits” won’t be turned on once 3.03 is mandatory – the two server problems are quite separate. Releasing 3.03 will help reduce the load on the data server, but the web server will still see insane amounts of traffic.

Once we settled in with 3.03, we can focus our energies on getting the web site more interactive. This is a matter of splitting up the web transactions between various machines, which means setting up new machines, as well as working on streamlining the database, optimizing CGI code. We might play around with php or mod_perl, blah blah blah – all the good stuff that makes a web site more of a pleasure to use.

And another thing: About a week ago the “view last 10 workunits” was turned on for about 12-14 hours. This was a complete accident, and we realized this quickly as our web server crashed. Sorry about psyching people out..

Bummer… the last 10 WU thingy was the one real “science result” thing that the normal user could check on the site. I just hope that they can get some other things up and running without causing too many problems on their servers.

Two Servers for Your IRC Pleasure!
Now you have two different IRC servers to choose from when trying to chat with your distributed project friends. The two servers are now networked together, so if one goes down you can hop on the other. The two servers are: irc.kulish.com and jtds.com. Hop on IRC, connect to either server, join the channel #distributed, and enjoy the fun! Updated: My bad… the other server is just jtds.com, not irc.jtds.com.


-Front Page