Archives

July 2001



July 24, 2001

SirCam
Over the past several years there has been a proliferation of worms and viruses hitting the web. In all that time I had never (to my knowledge) received any email containing a virus/worm/trojan. I may actually have been sent some, but probably wrote them off as spam…plus I never open attachments I’m not expecting.

Today, I guess my virus streak was broken. This is not going to be one of those “I got infected and everything is gone” posts…because that didn’t happen. I received an email, saw the text, and knew immediately it was a SirCam email. I saved the headers, took a look at the attachment name, and then deleted the email. The strange thing was that I had never received an email from this person, nor did I know who they were. I guess you never know who has your email address in their address book :). I sent this person a friendly email letting them know they had been infected, with a link describing the virus and a link to a download that removes it. Just about an hour later, I received a second virus email from this person…this time with a different subject and a different attachment name. In both emails, though, the subject matched the attachment name, and both attachments had extensions of the type XXXXXXXXXXXX.doc.lnk
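That double extension is the telltale: SirCam mails out a real document from the infected machine with an executable extension tacked on the end. Just as an illustration (a hypothetical sketch, not any official detection tool), flagging that pattern in an attachment name might look like this:

```python
import re

# SirCam-style names hide an executable extension behind a document one,
# e.g. "report.doc.lnk" or "photo.xls.pif". The extension lists here are
# illustrative, not exhaustive.
DOUBLE_EXT = re.compile(r"\.(doc|xls|zip|jpg)\.(lnk|pif|bat|com|exe)$", re.IGNORECASE)

def looks_like_sircam(attachment_name: str) -> bool:
    """Flag attachment names with a document.executable double extension."""
    return bool(DOUBLE_EXT.search(attachment_name))

print(looks_like_sircam("budget.doc.lnk"))  # True
print(looks_like_sircam("budget.doc"))      # False
```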

So, the moral of the story is to beware. If you want to know more about this virus, check out this link. If you think you’re infected, go to this page to download a program that will clean your system.

They’re At It Again
If you remember, a couple of months ago there was a group (or maybe just one person) who decided they didn’t like the S@H project and tried several means of putting the project in a bad light. The group regularly spams the alt.sci.seti newsgroup with FUD, and they also screwed with the S@H client to obtain some user email addresses from the user.sah files. They then used those addresses, along with addresses pulled from the a.s.s and s.a.s newsgroups, to spam those people with more of their FUD and other spam email. Their FUD normally runs along the lines that the project databases aren’t safe and are easily “compromised” or have been “hacked”.

They have been at it again over the past couple of days, spamming the newsgroups and the people whose email addresses they obtained earlier. The email they sent out had the text:

Dear Seti@home user

Seti@home been once again exploited.
http://www.vnunet.com/News/1124058

Break out the champaign. Victory all over again

Yeah, it may be a worm, but again, they need to check the definition of “exploited”. These little kiddies make a bunch of claims about insecure servers at Berkeley, but they have yet to show any proof of such supposed insecurity. The only “exploit” they have ever used was one I could have pulled off just by changing some info in the user.sah file the client spit out (before S@H changed the info it contained).

Matt Lebofsky posted to the sci.astro.seti group today to refute their claims and lay some smack down on them:

Perhaps I should clear this up:

1) We were never “hacked” – somebody figured out how to tease e-mail addresses out of our server and only a small subset of them. In no way has any of our machines here at the lab been compromised, and this was quickly fixed and hasn’t been a problem since.

2) This latest “hack” has absolutely nothing to do with us or our security outside of the fact that when it installs on the host machine, among other things it tries to download and install SETI@home to run as a specific user. Last I checked this user gained very, very few results from this worm/virus. Anyway, it should be painfully obvious that this isn’t a hack into SETI@home at all. Y’all should be worrying about all your machines that run Windows (and allow this sort of behavior by default) more than anything else.

3) My guess is if you’ve been getting more spam e-mails, it is either because you have your e-mail posted on this newsgroup or elsewhere, or because *everybody in the world* has been getting more spam e-mails (possibly due to the collapse of ORBS).

– Matt – SETI@home


July 19, 2001

Redoin’ da Stats
Just want to drop a note on the reason for today’s stats redo. The reason isn’t major, but I want to test out some tweaks to my spreadsheets. The main reason for redoing the stats is that the download of the top 200 teams didn’t complete. I was able to download the stats for TLC, but the rest of the top 200 teams didn’t download; it looks like the entire SETI site may have been down for a while. I am redoing things to get a complete set of stats. The second reason is the new “user profiles” they have implemented. Several TLC members submitted a user profile, and when a user has a profile it places a .gif and link next to their name. In my stats pull, Excel decided to place the mouseover text right after the member name, so the member name came out “Member User Profile” instead of just “Member”. I am attempting to filter that out now (a sketch of the idea is below). The third reason was to fix the mess-up I found with the work units per day for the top 200 teams. I found the most likely cause of the screw-up…and hopefully when the stats finish running everything will be A-OK.
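For what it’s worth, the fix amounts to stripping a known suffix off the scraped names. A minimal sketch of the idea (outside Excel, with a made-up member name):

```python
def clean_member_name(scraped: str) -> str:
    """Drop the ' User Profile' mouseover text that the stats pull
    glues onto the names of members who have a profile."""
    suffix = " User Profile"
    if scraped.endswith(suffix):
        return scraped[: -len(suffix)]
    return scraped

print(clean_member_name("LambChopFan User Profile"))  # -> "LambChopFan"
print(clean_member_name("LambChopFan"))               # -> "LambChopFan"
```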

�

July 18, 2001

1 Day to 100K
A word of congratulations goes out to svdsinner. The next update of the stats will show him passing the 100,000 work unit mark. That is a perty good accomplishment! That total sticks him in at the #65 position overall in the S@H project. More congratulations go out to Zoso for being named the second moderator of the Ars Distributed Computing Forum. It is a pretty good honor, and it is nice to have a bit of help modding the forum ;).

Movin’ and Shakin’
There is one advantage to having a huge lead over the other teams: even if a team below starts posting some pretty good numbers, it will take them a long time to catch up! Over the past couple of weeks SETI.Germany has been turning up the production. They are currently producing just about as many work units as TLC is…but with the large lead we have over them, it would take them next to forever to catch up :). In other notable team news, a couple of teams are starting to bunch up in the top 10. Even though Sun Microsystems had some notable defections and moved down to the #3 spot, they have recently taken back the #2 spot from SGI. SGI’s #3 spot shouldn’t be safe for too much longer, because Compaq is movin’ on up and is set to pass SGI sometime within a month and a half. Compaq should keep on keepin’ on and take over the #2 spot sometime thereafter. Moving a little further down the line, Art Bell has had some more member purging and now finds their #6 spot in jeopardy because of a charge from The Knights Who Say Ni!. KWSN may pass Art Bell soon…but they don’t seem to be matching the production of Team MacAddict to move up further.
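The “next to forever” bit isn’t just bravado: catch-up time is the size of the lead divided by the difference in production rates, so a chaser merely matching our pace gains almost nothing. A little sketch with made-up numbers:

```python
def days_to_catch_up(lead_wu: float, their_rate: float, our_rate: float) -> float:
    """Days for a trailing team to close a work-unit lead at current rates."""
    diff = their_rate - our_rate
    return float("inf") if diff <= 0 else lead_wu / diff

# Hypothetical figures: a 500,000 WU lead, the chaser doing 5,100 WU/day
# to our 5,000. Nearly equal rates make for a very long chase.
print(days_to_catch_up(500_000, 5_100, 5_000))  # 5000.0 days, over 13 years
```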

On a related note, I noticed that the graphs for the top team WU/day and the WU/day averages are kind of screwed up. My Excel spreadsheets aren’t right and need to be fixed; I will have to do that sometime soon! Speaking of Excel charts…in the past week I moved the statskeeping from my Win98 machine over to my Win2K box. The switchover has gone well, and I hope it will stay that way. I can’t say the same about my install of Win2K on my Win98 machine. That sucked big time. I should write an article called “the n00bie’s guide on how NOT to install Win2K”. I think I have my install straightened out though…I am writing this on that machine!

News From Berkeley
Actually…there ISN’T really any news from Berkeley. On the hardware side, there is not too much to say about their servers, and that is actually a GOOD thing in light of the recent problems they have had. Aside from the lack of hardware news, there are a couple of changes on the S@H site itself. They have added a few things; the first is that they have officially released version 3.05 for Mac OS X.

The next addition is “The SETI@Home User of the Day”. The User of the Day selection seems to have different criteria than the Cruncher of the Week. The Cruncher is randomly selected from all active S@H crunchers, but to be chosen as User of the Day a person needs to submit a user profile: “Exceptional user profiles will be chosen as ‘User of the Day’ and shown on the front page”. If you want to submit a user profile, you can submit one here.

The last improvement (well, I think it is new!) is the availability of downloadable “User Certificates”. Members can download the certificates from their user statistics pages; they should show up at the bottom of the page. It looks like they come in a few different flavors: 100 WU, 1,000 WU, 10,000 WU, and 100,000 WU certificates. Check them out.
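Assuming the certificates simply track those thresholds (the page doesn’t spell out the rules), finding the highest one a member has earned is a quick lookup; a hypothetical sketch:

```python
# Certificate thresholds as listed on the stats pages.
TIERS = (100_000, 10_000, 1_000, 100)

def highest_certificate(work_units: int):
    """Return the largest threshold reached, or None if under 100 WU."""
    for tier in TIERS:
        if work_units >= tier:
            return tier
    return None

print(highest_certificate(2207))  # -> 1000
print(highest_certificate(42))    # -> None
```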


July 9, 2001

TLC Back On-Line!
YAY! One headache has passed. I want to apologize for all the problems we had with access to the teamlambchop.com address over the past month or so. I guess you can chalk it all up to several things: 1) a lack of understanding of what was going on; 2) a lack of persistence on my end in demanding answers about what was going on; and 3) a lack of direct and immediate control over the domain name, plus uncertainty about the process of getting things straightened out. Before I get into the details, I do want to say that Jason of Nimbus Networks has been a great help to me and Team Lamb Chop over the past year plus. He has hosted the TLC site at no charge during that time. Even though there have been minor problems here and there, for the most part the site has been hosted reliably, on some speedy servers and internet connections.

The recent problems surfaced mainly because of the dot-com crash: Austin, TX, which was once a budding internet hotbed, fell by the wayside along with many dot-com companies. Somewhere in the past month or so Jason decided to move shop to the San Francisco area, and that is where the troubles began. Sometime within the past month and a half, the system Nimbus Networks normally ran on was dismantled and it became a “home shop” (most likely in preparation for the move). Unfortunately, this goofed up the DNS servers. The primary DNS server wasn’t running, and while the secondary DNS server was up, it was not “seen” by the entire world because of some upstream connection issue or whatnot. Most of the world could not reach the site through the teamlambchop.com address because of this. BUT I actually could read the site fine on my end (some others could as well). Unfortunately, I assumed things would get straightened out with the upstream ISP (they never did), and Jason assured me that things were set up correctly on his end.

Sometime over a week ago Jason switched the DNS pointer for the TLC domain over to Hagabard’s mirror site, but again people couldn’t access the site because the DNS server was “hidden” from most of the world (again, I could access it fine). Things went completely dark for the TLC domain just about a week ago when Jason took all of his servers offline so he could move things to California.

I have to admit that I really didn’t know much about how DNS works, which computer does what, and what is needed to keep a site online. Now I know a heck of a lot more than I did a month or so ago :). I quickly realized that, the way things were set up with the domain registration, I could not make any quick adjustments, and I needed some control over the domain name. Up until last week I was listed as the registrant for the TLC domain name, but Jason was the Administrative, Technical, and Billing Contact. In the past week I have gotten this changed.

A bit into the problems, Hagabard offered his server as a mirror for the TLC site, at tlc.hagabard.com. This has been up and running for a while, and I thank him for the space on the server! In the past week I got myself listed as the Administrative Contact, and when that went through I put in a change of both the registrant and the DNS service over to easyDNS.com. The DNS service changeover was easy and painless; it took maybe two hours until everything was set up and confirmed. The next morning (Sat, Jul 7th) the information was changed in the Whois database…and over the next several hours the teamlambchop.com site became available to many people again (it finally propagated to me sometime Sunday afternoon). The change of registrant is still in the works (I don’t think NSI works over the weekend ;).
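If you ever land in a similar DNS mess, remember that a lookup only tells you what your own resolver sees, which is exactly why I could read the site while most of the world couldn’t. A minimal sketch of checking from your own vantage point, using Python’s standard library:

```python
import socket

def resolves(domain: str) -> bool:
    """Ask this machine's configured resolver whether the domain resolves."""
    try:
        ip = socket.gethostbyname(domain)
        print(f"{domain} -> {ip}")
        return True
    except socket.gaierror as err:
        print(f"{domain} did not resolve: {err}")
        return False

resolves("teamlambchop.com")
```

To see whether a change has really propagated, you would want to run the same check from resolvers on different networks, since answers vary until the new records spread.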

Chalk it up to a lack of communication and understanding of what was going on and what needed to be done. Right now the teamlambchop.com domain points to Hagabard’s servers…so, in effect, typing in tlc.hagabard.com and teamlambchop.com will take you to the same site. Things are still up in the air concerning hosting on Nimbus Networks. Jason said he would still be happy to host the TLC site, but it may be a little while before he is up and running in California. Guess we will have to wait and see on that end of things.

Changes at Ars Technica
Along with the dot-com bust, there has also been an advertising bust going on. Running a hugely successful and popular website on advertising revenue alone is becoming harder and harder to do. Several years ago, when Ars Technica started up, advertising revenue alone was enough to keep a medium-to-large website up and running, with maybe some cash left over. This has changed. These days advertising revenue is somewhere between 20 and 100 times less than it was a couple of years ago. Luckily, Caesar and the gang over at Ars saved a lot of that extra cash to help weather the advertising slump. Unfortunately, the slump continues and has never rebounded. The money they had saved has run out, and they have actually taken out personal loans to keep the site up and running.

If things didn’t change, there was the possibility of severe cutbacks at the Ars site, most likely getting rid of several of the forums. If that didn’t help, there was also the possibility of the site shutting down completely by the end of the summer. To help alleviate the financial situation over at Ars, this weekend they unveiled a Premium Membership plan for the site. The membership is not meant to limit access to the content of the main Ars Technica site; it is a “value added” subscription model. All information on the main site remains available, as do the technical forums on the Ars Forums (including the Distributed Computing forum). There may (or may not) be restricted posting access to the non-technical forums, but that has not been determined yet. For the latest information I suggest you check out the Membership page listed above; there are also discussions about the plan(s) in the Lounge and Feedback forums.

I am not writing this to try to force anyone to subscribe, but I want to point out the option to everyone. Many of you may not think the $5/month they are asking is worth the added “value” they are offering. I personally don’t think of it that way. To me, my $5/month is going towards keeping Ars up and running. I value Ars Technica as a service, and I would hate to see one of the independent and honest hardware sites on the net disappear. Yes…I know that Caesar et al. could sign on with an advertising company that pays more than the one they have now. But they have the integrity NOT to sell out to corporate entities for exclusive sponsorship, nor do they want invasive and intrusive advertising that irritates the reader to no end. As it turns out, the pop-up, pop-under, interstitial, and other advertisements that take over the browser are the ones that pay the most money.

Ars Technica has been a tremendous help to me over the two-plus years I have been reading the site. The site and forums have helped shape my choices for the computers I have built and the home networking I have set up. The forums have been a great recruiting tool for Team Lamb Chop, and distributed computing now has a home there. Needless to say, without Ars there would be no Team Lamb Chop site! It is a great community and a great site, and I have done my part to help it out. :)

News From Berkeley
There is a new statistic that showed up on the account summary page called “CPU Time to Radio Signal Time Ratio” (well, I just looked, and it has disappeared). This was really just another way to express the average CPU time per work unit. Eric K. sort of explains:

>”SETI@home data tapes from the Arecibo telescope are divided into small
>”work units” as follows: the 2.5 MHz bandwidth data is first broken down
>into 256 sub-bands by means of a 2048 point fast Fourier transform (FFT) and
>256 eight point inverse transforms. Each work unit consists of 107 seconds
>of data from a given 9,765 Hz sub-band.”
>
>Given that, I’m still lost in space. I think that means that it takes 256
>work units to do 107 seconds of “real time data” since it’s broken into
>sub-bands before they send it out (by complex means – I really don’t get).
>Thank goodness the processing isn’t that complicated. 500,000 active users?
>Somehow I doubt we’re that evolved as a species 🙂

We had a bit of a debate here about how to calculate the “signal time”. Eric H. was of the opinion that we should use 107 seconds times the number of results because 107/256 would make people discouraged. I was of the opinion that 107/256 seconds times the number of workunits was the best number to use because it really represents the amount of the whole data stream (2.5 MHz) that has really been processed. I also like the lower number because it really shows the difficulty of the task. I, for one, am not discouraged that my 2207 results represent only 926 seconds of data as recorded at Arecibo. If it were really 2 3/4 days of data, why would we need half a million people analyzing the data?

The info isn’t on the account summary anymore…maybe it will turn up later. If it does, now you know what it means :).
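To put numbers on the debate Eric describes, here are both calculations side by side (the sketch is mine; the figures come from the quote above):

```python
SECONDS_PER_WU = 107   # each work unit covers 107 seconds of recorded data
SUB_BANDS = 256        # the 2.5 MHz band is split into 256 sub-bands

results = 2207  # Eric's example result count

# Eric H.'s number: count each result as the full 107 seconds.
optimistic = results * SECONDS_PER_WU             # 236,149 s, about 2.7 days

# Eric K.'s number: each work unit is 1/256th of the band, so it covers
# only 107/256 s of the full 2.5 MHz stream.
realistic = results * SECONDS_PER_WU / SUB_BANDS  # ~922 s, about 15 minutes

print(f"{optimistic:,} s vs {realistic:.0f} s")
```

The second figure lands within a few seconds of the 926 Eric quotes, so the 107/256 interpretation is clearly the one he used.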

In the same thread there were several requests for some other stats for the site…and Eric responds again:

>How about a trend line of the # of WU’s that have been RC’d?
>(redundency checked)

Good idea. Right now the number is 49,591,949, which puts us 52.8% of the way to caught up. RC’s are turned off today to build some indices needed for persistence checking.
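Working backwards from those two figures (roughly, since 52.8% is itself rounded), the implied total pile of results is the count divided by the fraction:

```python
# If 49,591,949 redundancy-checked results is 52.8% of the way there,
# the whole pile works out to roughly 94 million results.
checked = 49_591_949
print(f"{checked / 0.528:,.0f}")  # -> 93,924,146
```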

There is some more from Eric in a thread talking about the stats. Some people were wondering why they would make progress in the rankings for a while, then, while doing the same number of work units, get knocked back down the rankings quite a bit. There is an explanation for that:

The reason you are seeing the effect you describe is that we only update the ranking stats every couple of days. Between the times we update the ranking list, every workunit you return will seem to advance you. When we update the ranking list, you drop back to your true ranking.

It’s not the prettiest method; our database isn’t fast enough to generate the ranking table in realtime without slowing everything else down too much.
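In other words, your displayed rank mixes a live work-unit counter with a stale ranking table. A toy sketch of the effect (names and totals made up):

```python
# The ranking table is rebuilt only every couple of days, while your own
# work-unit total updates live.
snapshot = {"rival_a": 2210, "rival_b": 2250}  # totals frozen at last rebuild

def apparent_rank(your_live_total: int) -> int:
    """Rank computed against the stale snapshot using your live total."""
    return 1 + sum(wu > your_live_total for wu in snapshot.values())

print(apparent_rank(2200))  # 3 -- your rank when the table was built
print(apparent_rank(2220))  # 2 -- each returned WU seems to advance you...
# ...until the next rebuild refreshes everyone's totals; if the rivals
# kept crunching too, you "drop back to your true ranking".
```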

Finally…
What about the “View Recently Completed Workunits” page?

>When is the “View Recently Completed Workunits” feature going to work? Till
>now all I get is “CGI illegal operation” or “This feature is currently
>disabled.”.

I wish I knew. I would estimate that it will be back 2 weeks after we’ve solved all of the problems with our science database. We’re still picking at some problems from the RAID card failure of May and June. Could be 4 weeks from now. Could be 2 months.

