on Friday January 24, @11:16AM (#5151225)
It's no wonder these servers have so many problems - there's thirteen
of them! They need a lucky #14
by El_Smack (267329) on Friday January 24, @11:20AM (#5151246)
From the article:
"Researchers believe that many bad requests occur because organizations
have misconfigured packet filters and firewalls, security mechanisms
intended to restrict certain types of network traffic. When packet
filters and firewalls allow outgoing DNS queries, but block the
resulting incoming responses..."
It's nice to see a story with info I can take and use. This is
actually "stuff that matters".
Kudos to the researchers, and now I am off to check my firewall.
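For anyone else about to check theirs: the failure mode the researchers describe is easy to reproduce. If the filter lets the query out but eats the response, the resolver just sees silence and retries, which is exactly the repeat traffic the roots are complaining about. A minimal sketch using only the Python standard library (the root server address is the well-known a.root-servers.net; the timeout and retry counts are just illustrative):

import socket
import struct

def make_query(name, qid=0x1234):
    """Build a minimal DNS query packet for an A record."""
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)  # 1 question
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

# a.root-servers.net -- if the firewall drops the incoming response,
# every attempt below looks like a lost packet, so the resolver retries.
ROOT = ("198.41.0.4", 53)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
for attempt in range(3):
    sock.sendto(make_query("slashdot.org"), ROOT)
    try:
        response, _ = sock.recvfrom(512)
        print("got %d-byte response on attempt %d" % (len(response), attempt + 1))
        break
    except socket.timeout:
        print("attempt %d timed out (a blocked response looks just like this)"
              % (attempt + 1))
sock.close()

If all three attempts time out while other UDP traffic works, your firewall is probably one of the misconfigured ones they're counting.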
by swordboy (472941) on Friday January 24, @01:16PM (#5152200)
So you're trying to tell me that you've never accidentally typed
slashdot.elvis instead of slashdot.org?
by Anonymous Coward on Friday January 24, @11:31AM (#5151318)
Not if you notice the source. I have done work with SDSC, and while they may have supercomputers, not many of the people there are super at using them.
by Anonymous Coward on Friday January 24, @11:18AM (#5151239)
Scientists at the San Diego Supercomputer Center found that 98% of the Slashdot comments at the root level are unnecessary.
by prockcore (543967) on Friday January 24, @01:05PM (#5152109)
About 70 percent of all the queries were either identical, or repeat
requests for addresses within the same domain. It is as if a telephone
user were dialing directory assistance to get the phone numbers of
certain businesses, and repeating the directory-assistance calls
again and again.
This is somewhat of an invalid metaphor for both the way DNS works
and the way computer caching works.

It's also a misinterpretation of the data. The duplicates aren't
all coming from the same place; they're all looking up servers on
the same domain.

They're not redundant, because they're coming from different
servers. Those idiots at the university are going "look at all these
requests for slashdot.org! Talk about redundant!" without understanding
that there are thousands of DNS servers making those requests,
and DNS entries expire, so they cannot be cached forever.
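The caching point is simple enough to demonstrate. Every one of those thousands of resolvers keeps its own cache, and an entry only lives until its TTL runs out. A toy sketch (the address and TTL are made up for illustration):

import time

class TTLCache:
    """Toy resolver cache: answers expire when their TTL runs out."""
    def __init__(self):
        self._store = {}  # name -> (answer, expiry time)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None
        answer, expires = entry
        if time.time() >= expires:
            del self._store[name]  # TTL expired; must re-query
            return None
        return answer

    def put(self, name, answer, ttl):
        self._store[name] = (answer, time.time() + ttl)

cache = TTLCache()
cache.put("slashdot.org", "66.35.250.150", ttl=3600)  # hypothetical values
print(cache.get("slashdot.org"))  # served from cache, no query sent
# An hour later the entry is gone, and this resolver -- like every other
# resolver on the net -- has to ask again. Multiply that by thousands of
# independent caches and "70 percent duplicates" stops looking so mysterious.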
by casmithva (3765) on Friday January 24, @12:43PM (#5151890)
So let me see if I'm getting this right. According to their article,
I've somehow misconfigured my nameserver if a query for slashdot.org
goes from my local nameserver to the root, then to a VeriSign gTLD
server, and then to a VA Software (or whatever y'all are known as
this year) nameserver? Funny, I thought that's how DNS was supposed
to work! I suppose they want us to go set up ~300 forward zones in
our nameservers to prevent these unnecessary queries...? Yeah, okay,
sure, I'll get right on that after lunch. *snicker*
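For the record, that root-to-gTLD-to-authoritative walk is trivially observable. A rough sketch using dnspython, with only the first hop shown live (the root address is a.root-servers.net; a real resolver would pull the next hop's address out of the referral's additional section rather than hardcoding anything):

import dns.flags
import dns.message
import dns.query
import dns.rdatatype

def ask(name, server_ip):
    """Send one non-recursive query and return the response."""
    q = dns.message.make_query(name, dns.rdatatype.A)
    q.flags &= ~dns.flags.RD  # iterative: don't ask the server to recurse
    return dns.query.udp(q, server_ip, timeout=3)

# Step 1: the root refers us to the .org servers.
root_resp = ask("slashdot.org", "198.41.0.4")
print([str(rrset) for rrset in root_resp.authority])

# Step 2: a gTLD/.org server refers us to the slashdot.org nameservers...
# Step 3: ...which finally hand back the A record. Three hops, zero
# misconfiguration -- exactly how the protocol is supposed to work.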
by TerryAtWork (598364) on Friday January 24, @02:46PM (#5152817)
This is pathetic and typical of the UNIX community
who are not half as smart as they like to think they are.
SURELY in any application that deals with a ton of data, a maelstrom it can hardly keep up with, the *first thing you do* is filter out every possible malformed and nonsensical item so you don't have to process it.
And what you DON'T do is kick anything that doesn't make sense upstairs. What were they thinking?
And THIS - the fact that a DNS server - A DNS SERVER! - doesn't know that .elvis does not exist is CRIMINALLY NEGLIGENT. How hard is it to put a little text file listing the valid TLDs in every DNS server?
WHAT THE HELL WERE THEY THINKING????
Mod me down, take your best shot. JEEZ those Unix snots burn me up.
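P.S. Before anyone claims it's hard: the filter I'm talking about is a few lines of Python, assuming you've fetched the TLD list to a local file (IANA publishes one at https://data.iana.org/TLD/tlds-alpha-by-domain.txt; the path below is made up). The only real catch is keeping the list current, since the root zone does change:

# Load a local copy of the valid TLDs, one per line (path is illustrative).
with open("/etc/valid-tlds.txt") as f:
    VALID_TLDS = {line.strip().lower() for line in f
                  if line.strip() and not line.startswith("#")}

def should_forward(qname):
    """Drop queries whose TLD isn't in the root zone (e.g. 'slashdot.elvis')."""
    tld = qname.rstrip(".").rsplit(".", 1)[-1].lower()
    return tld in VALID_TLDS

print(should_forward("slashdot.org"))    # True
print(should_forward("slashdot.elvis"))  # False -- never bother the roots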