Re: [gnso-ff-pdp-may08] Choke points
- To: "gnso-ff-pdp-May08@xxxxxxxxx" <gnso-ff-pdp-May08@xxxxxxxxx>
- Subject: Re: [gnso-ff-pdp-may08] Choke points
- From: "George Kirikos" <fastflux@xxxxxxxx>
- Date: Thu, 7 Aug 2008 19:21:50 -0400
Hi Joe,
Thanks for an excellent post (I've read most of the archives, by the
way, and most of the papers you've posted already).
Would it be fair to say that a combination of factors is going to be
needed, rather than just one? Ideally it would be nice to know which
methods are causing the most damage, so those could be targeted first,
or which are the lowest-hanging fruit among all the different
problems.
You hinted at that when you discussed signature-based malware
detection, and how it's become ineffective, as the malefactors can
simply give their malware a different signature at will, at no cost
(and indeed scan their own creations before they're unleashed to
ensure that they can slip through those scans successfully).
On Thu, Aug 7, 2008 at 5:32 PM, Joe St Sauver wrote:
> #They also need an identity (for mandatory WHOIS). Is that WHOIS completely
> #fake, or a stolen one, or a borrowed one of a legit registrant?
>
> Whois may be concealed by a whois privacy service, or be completely bogus,
> or stolen, etc., yes. Another interesting google search:
>
> bullet proof domain name registration
This is where things can be done. For example, once a domain is
caught conducting illegal activity (say, one registered via a privacy
service), that's when one can blacklist that privacy service from
registering a domain again (unless they cough up the real details of
the responsible parties). Ultimately, the privacy service is the
registrant if it can't identify another responsible party.
As for fake/bogus WHOIS, as you mentioned below, that doesn't require
a "fast flux" policy --- domains can already be taken down for fake
WHOIS without creating a separate policy.
> http://ha.ckers.org/xss.html ), and the reality that the malware
> d00dz can repack and re-release faster than signature based A/V
> vendors can re-analyze/build new signatures/distribute new signatures,
> and I think malware will remain an ongoing threat for the dominant
> operating system for the foreseeable future. Classic quote to show
> you just how bad things have gotten, malware-wise:
>
> "At the start of 2007, computer security firm F-Secure had about
> 250,000 malware signatures in its database, the result of almost
> 20 years of antivirus research. Now, near the end of 2007, the
> company has about 500,000 malware signatures." 'We added as many
> detections this year as for the previous 20 years combined,' said
> Patrik Runald, security response manager at F-Secure.
Right, and ultimately with user-generated content, the attackers can
always infect MySpace pages, Facebook apps, Yahoo Geocities pages,
etc. You don't see those being taken down, though. A lot of spam comes
from Gmail, Hotmail, Yahoo, etc., as their CAPTCHAs have all been
broken, but those domains don't get taken down --- i.e. some level of
"crime" is deemed to be acceptable.
> Let's assume that the CBL is probably the best currently available
> listing of bot'd hosts, typically listing in excess of 5,000,000 dotted
> quads at any given time.
How would ISP blocking of those hosts (or blocking through a browser)
cut down on the problem? (i.e. moving the solutions to the "edge"
rather than something central) What's the false positive rate for
those lists, and how quickly do they remove an innocent blacklisted
entry?
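Just to make the "edge" idea concrete, here's a rough sketch (not a
recommendation) of the kind of lookup an ISP or edge device could do
against a DNSBL before allowing traffic; the query zone name below is
illustrative, and should be whatever list the ISP actually subscribes
to:

import socket

# Rough sketch: check whether a dotted quad is listed on a DNSBL.
# The zone name is illustrative (CBL, Spamhaus, etc. each publish
# their own query zone).
def is_listed(ip, zone="cbl.abuseat.org"):
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)   # any A record back means "listed"
        return True
    except socket.gaierror:           # NXDOMAIN means "not listed"
        return False

# 127.0.0.2 is the conventional DNSBL test address on most lists.
print(is_listed("127.0.0.2"))

The false positive and delisting questions above still apply, of
course; the lookup itself is the easy part.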
> But keep in mind that many ISPs do NOT block http servers hosted in
> user space, so the point may be somewhat moot.
If it can become a best practice (like port 25 blocking has), and is
cheaper to implement with less collateral damage, then I don't think
it's something that should be discounted. It's not something that
ICANN can force upon ISPs, but they can use the bully pulpit to make
some noise (or those who are most damaged, banks, etc., can push for
legislation or work with ISPs to fix the problem). If the NY attorney
general can scare ISPs into cutting off access to Usenet binaries in
the fight against child porn, one can probably find the right levers
and buttons to push to get them to block port 80 too (where there's
likely very little collateral damage).
> #Ok, say I give you my bank account number and password for
> #realbank.com, but that realbank.com is protected by a 2nd factor, e.g.
> #a password sent by SMS to my cell phone,
>
> Some folks have tried this approach. At least one successful attack has
> been demonstrated as being able to overcome it:
Right, I'm not saying any solution is going to be 100% effective.
But if it cuts down on the problem substantially, raising the bar for
those who would try to defeat it, then it's worth doing (and might be
more cost-effective, etc.).
> Hardware crypto tokens are nice, but are not a perfect solution for a
> few reasons. See, for example:
Right, I'm not trying for a "perfect" solution, but one that is
efficient and makes things much harder for attackers. A MITM attack is
probably going to take a lot more sophistication than simply using a
stolen username/password to get access. Make it harder than something
point-and-click that a script kiddie can do.
> Money is normally extracted from compromised accounts by "cashiers,"
> who perform that service in exchange for a flat fee, or more typically
> a share of the funds extracted. See for example the discussion in
> "An Inquiry Into the Nature and Causes of the Wealth of Internet
> Miscreants," http://www.icir.org/vern/papers/miscreant-wealth.ccs07.pdf
> at page two, right hand column near the bottom.
They didn't really give much detail:
"After purchasing credentials, the fraudster may employ the services
of a "cashier," a miscreant who specializes in the conversion of financial
credentials into funds. To perform their task, the cashiers may work
with a "confirmer," a miscreant who poses as the sender in a money
transfer using a stolen account."
One would think that there's not a huge supply of these people taking
the risk of getting caught. Why aren't they getting caught? (i.e. the
real money has to move out of the victim's account to someone else's
account, and the money has to end up somewhere... keep following the
money)
> You may want to review the excellent "U.S. Money Laundering Threat
> Assessment," http://www.treas.gov/offices/enforcement/pdf/mlta.pdf
Some good quotes in that paper, like:
"The farther removed an individual or entity is from the bank, the
more difficult it is to verify the identity of the customer."
which have parallels to domain registrations.
> #Ok, when do they send out the spam, in relation to the domain
> #creation/registration date above? Immediately? What happens if that is
> #delayed for a week?
>
> Folks have noticed that spammers like to promptly exploit newly
> registered domains in the hope that by doing so, anti-spammers won't
> yet have listed the new domains on things like the SURBL or URIBL.
> Their ability to do so, however, is impacted by the existence of
> things like the "Day Old Bread" list, which lists domains registered
> within the last five days (see http://www.support-intelligence.com/dob/ ).
> DOB tests are incorporated in anti-spam products as popular as
> SpamAssassin (see http://spamassassin.apache.org/tests_3_2_x.html )
Hmmm, why is there any need for a "day old bread" list at all? Just
download the entire zone file from 6 days ago, and if you see a domain
name that wasn't in it, treat it differently: quarantine it, penalize
it, etc. That'll capture nearly everything (except for a few cases
where the registrant of an old domain removed their nameservers for a
day). Something like the sketch below is all it would take.
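Something along these lines (a rough sketch only; the zone file names
are made up, and I'm assuming one domain name per line):

# Rough sketch: flag domains that appear in today's zone file but not
# in the zone file from 6 days ago. File names are hypothetical.
def load_domains(path):
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

old = load_domains("com.zone.6-days-ago.txt")
new = load_domains("com.zone.today.txt")

fresh = new - old   # not seen 6 days ago: quarantine/penalize these
for domain in sorted(fresh):
    print(domain)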
So, once again, what % of spammers are doing it in the first 6 days?
It would really be nice to get some stats (from the anti-phishing or
anti-malware folks, or from a list of domains that were taken down).
> #Are only companies opting not to use these email authentication methods
> #vulnerable?
>
> No, as mentioned just a second ago, even companies that do use these
> approaches can still run into issues with traffic from "look alike" domains.
Right, not looking for a perfect solution, just trying to whittle
things down by taking out various attack vectors. E.g. if a bank had a
6-day window to notice a newly registered fake8ank.com ("8" swapped in
for "b" as a typo or look-alike), that would give them a big window to
stop an attack before it even started (if the DNS wouldn't resolve for
a certain number of days after the creation date). Registrars, humans,
and brand monitoring agencies are pretty good at picking up lists of
names to complain about once they get published in the zone files
(they just need a few days to work with registrars to get them shut
down before things start); see the sketch below.
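Something like this, run over the daily list of newly published names
(the substitution table and protected labels are purely examples, not
a real monitoring product):

# Rough sketch: flag new registrations that match a protected label
# after common look-alike character substitutions.
SUBS = {"8": "b", "0": "o", "1": "l", "3": "e", "5": "s"}
PROTECTED = {"fakebank", "realbank"}   # the bank's legitimate labels

def normalize(label):
    for digit, letter in SUBS.items():
        label = label.replace(digit, letter)
    return label

def suspicious(domain):
    label = domain.lower().split(".")[0]   # "fake8ank" from "fake8ank.com"
    return normalize(label) in PROTECTED

print(suspicious("fake8ank.com"))   # True: "8" -> "b" gives "fakebank"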
> #How does one convert that link to a visitor? i.e. educating people not
> #to click links can thwart this attack vector?
>
> There are vulnerabilities that do not require any active clicking to
> take place. :-(
Right. :( That's why I've been so vocal about things like DNS cache
poisoning, DNSSEC, etc. -- if even the most paranoid security people
can get taken down by those vulnerabilities, then everyone is exposed
(I don't run wireless, for example, or other things that would make me
vulnerable, so that's one less thing to worry about; I use SSL where
possible; but some things you can't fight against).
> #Does having the email
> #client actually remove the link (by filtering the HTML/mesage source)
> #thwart the attack?
>
> Sanitizing potentially dangerous email constructs with something like
> Procmail Email Sanitizer (see
> http://www.impsec.org/email-tools/procmail-security.html ) can be
> very helpful, but note that by the time you're doing so, many HTML
> formatted messages are NOT going to look very pretty.
If that's the price to pay, I think a lot of folks would be willing
to put up with "not pretty" HTML messages if those emails were a lot
more secure. If all banks could be made 100% safe by painting them
purple and green, that would become the new "black".
> That falls apart, however, if the bad guys hijack the user's DNS server
> settings on the user's PC, as discussed in "Port 53 Wars: Security of
> the Domain Name System and Thinking About DNSSEC,"
> http://www.uoregon.edu/~joe/port53wars/port53wars.pdf
If your DNS servers get hijacked, there's not much that can be done at
all by removing a domain from the zone, as the attacker can simply
point www.realbank.com to the IP address of their choice, and then
we're not talking about domain names anymore.
> #In other words,
> #is the problem solved by having the ISP choose not to resolve things
> #(i.e. a "clean" ISP that automatically filters out phishing domains,
> #or a clean set of nameservers like OpenDNS that actively filters out
> #phishing domains, or modern browsers that warn/filter out phishing
> #domains?)?
>
> No, for the reason just mentioned... you can't trust user DNS not to
> have been hijacked.
I disagree. As above, once an attacker has control of your DNS, they
can simply point www.yahoo.com or www.aol.com or www.ebay.com to the
IP address of their choice. They don't need any domain names at that
point.
> While some users enjoy extremely accurate spam filtering, others make
> do with a somewhat leakier umbrella, shall we say, and there's the
> constant tension between false positives and false negatives.
And some folks will give up their passwords if someone calls them up
on the telephone pretending to be a banker. We can't protect everyone
from their own stupidity.
> Actually, it pretty much is. Hate to say, and sure have looked at a lot
> of other options (as my commentary above may illustrate), but for fastflux,
> you really DO need registrar/registry cooperation.
I think they're part of the solution, but not the only part.
> #Why does it work? The attacker then creates 10,000 new domains, and
> #the process starts all over again.
>
> Processes which scale well are highly desirable. :-)
Indeed, that's what we're all trying to do. So, suppose .info, .com,
etc. say, as an experiment, that freshly created domains don't resolve
at all in the first 6 days. That scales well. What % of attacks would
that stop? Collateral damage is arguably minimal, as most folks don't
do much in those first few days (take a look at the number of parked
domains at GoDaddy, for example).
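Just to sketch what that experiment might look like on the registry
side (the function and field names here are hypothetical; the registry
already knows each domain's creation date):

from datetime import datetime, timedelta

HOLDDOWN_DAYS = 6

# Rough sketch: only publish/resolve a domain once it is at least
# HOLDDOWN_DAYS past its creation date.
def eligible_to_resolve(creation_date, now=None):
    now = now or datetime.utcnow()
    return now - creation_date >= timedelta(days=HOLDDOWN_DAYS)

# A domain created yesterday would not yet be published:
print(eligible_to_resolve(datetime(2008, 8, 6), now=datetime(2008, 8, 7)))
# One created 8 days ago would be:
print(eligible_to_resolve(datetime(2008, 7, 30), now=datetime(2008, 8, 7)))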
> The great system he's talking about implies the ability to get prompt
> action from registrars to take down these domains.
As long as folks are able to register and resolve domains faster than
registrars are able to take them down, this issue will continue to
exist, unless there's a "cost" introduced that breaks that cycle.
> #why isn't every ISP using it instead, or why isn't every user opting
> #into it.
>
> One cannot use what doesn't exist (as an ISP); users don't have the
> knowledge and technical ability to do so.
Free market: there's an opportunity. Certainly ISPs subscribe to spam
block lists; one would think it's not a huge technical leap to an
anti-phishing blacklist (and to some extent that's already done via
OpenDNS, etc.; I can block all .ru, .br, .badistan DNS from resolving
with a click). Something along the lines of the sketch below.
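(Purely illustrative; the TLD set and the feed contents below are
stand-ins, not a real blacklist feed.)

# Rough sketch: refuse to resolve blocked TLDs or domains on an
# anti-phishing feed, the kind of policy an ISP resolver or an
# OpenDNS-style service could apply before answering a query.
BLOCKED_TLDS = {"ru", "br"}
PHISHING_FEED = {"fake8ank.com", "bad-example.net"}

def should_resolve(domain):
    domain = domain.lower().rstrip(".")
    if domain.split(".")[-1] in BLOCKED_TLDS:
        return False
    return domain not in PHISHING_FEED

print(should_resolve("example.ru"))    # False: blocked TLD
print(should_resolve("example.com"))   # True: not on the feed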
> #Or if the system is "perfect", why aren't those who are making reports
> #willing to provide a huge bond against liability should they take down
> #a legitimate site by mistake?
>
> For the same reason that "Good Samaritan" laws are needed in most states
> to shelter public spirited individuals against malicious lawsuits or
> unforeseeable misadventures in non-cyber situations.
I don't think that's a good enough answer. If I'm Google and I'm
opening myself up to the possibility of losing $100 million because a
registry operator overrides the wishes of the registrar and shuts down
the domain "by mistake", someone's gotta pay.
> I would always assume that automated classification systems may occasionally
> make mistakes. That's one reason I have consistently suggested the
> desirability of having human review of proposed decision making, and
> other checks and balances (this discussion took place before you joined;
> for backstory, see the archives at forum.icann.org/lists/gnso-ff-pdp-may08/ )
There was certainly human review in the ca.gov case, and probably
also for the McAfee/Yahoo Google example too (i.e. I believe McAfee or
one of the competing services claims they manually check the "top
30,000" websites on the internet to reduce false positives).
See this report: http://blogs.zdnet.com/security/?p=1629
"60 percent of the top 100 most popular Web sites have either hosted
or been involved in malicious activity in the first half of 2008."
If those top 100 websites risk getting shut down, one certainly needs
some form of bond before some accuser takes them down (or some method
of whitelisting, but one that's scalable to more than just the top 100
websites).
Little sites have a lot less weight to throw around to get off
mistaken blacklists:
http://www.thisistrue.com/blog-yahoo_alert_trues_biggest_crisis_ever.html
> Only a fraction of all losses are reported. The IC3 report, in particular,
> substantially over-represents auction related fraud for reasons they
> acknowledge and discuss, e.g., see page 40 of "A Succinct Cyber
Some folks have an incentive to overstate the problem, though. Some
real stats from Australia:
http://www.australianit.news.com.au/story/0,24897,23984660-15306,00.html
"Chris Hamilton, chief executive of the Australian Payments Clearing
Association - which runs the Eftpos network - said he was puzzled by
the ABS figure of $1billion.
According to its figures, fraud on locally issued cards reached $111.5
million last year.
"I would have assumed that nearly all instances of card fraud that
come to the attention of the individual consumer would be reported to
their financial institution," he said.
"That's why we think our fraud data is probably pretty good there
wouldn't be a lot of under-reporting."
A factor of 10 difference, just in one country's reporting. If the
USA's economy is 10x bigger than Australia's, scaling that $111.5
million figure would put fraud on US-issued cards on the order of
USD $1.1 billion/yr.
> The vast majority of the hundreds of billions in losses are associated
> with losses of intellectual property, including things as diverse as
> running shoes, prescription drugs, luxury goods such as watches and
> designer handbags, copyrighted music and movies, pirated software, etc.
Right, like the peasant in Cambodia for whom "$500" in damages might
be claimed for a "lost sale" of MS Office, or the poor teenager who
downloads $20,000 "worth" of music but obviously couldn't have
afforded to pay for it anyway. One should be very careful in
attributing financial losses to those kinds of things -- one has to
look at what the level of demand would be without piracy (and it
probably wouldn't have made much of a difference).
And given what you mentioned above about where the big losses are
coming from: are sites like The Pirate Bay, which have been online for
years (and have fought things through the courts), susceptible to
being shut down by some central authority if they decide to start
using fast flux for resilient hosting?
> Because there isn't, that's why some registrars get hammered with court
> orders currently -- but THAT's an unwieldy and expensive process if
> there ever was one.
The price we pay in a civilized society when *private parties* want to
settle disputes.
> The fastflux domains really jump out at you once you start to look
> at them, truly, and if you're worried about false positives, things
> like bogus/incomplete whois data along with criminal activity on the
> domain itself reduces inhibition to action quite rapidly.
I'm sure they do jump out, and most become "obvious" -- there should
be no worries about posting a "bond" in those cases. And there's
already a policy for takedown on bad WHOIS (it just needs to be
enforced).
"Criminal" is in the eye of the beholder. There are varying national,
state, and local laws. It's still illegal to spit in public in Toronto:
http://olympics.thestar.com/2008/article/472055
Private enforcement and interpretation of public laws is a very
dangerous and slippery slope (leads to vigilantism).
Sincerely,
George Kirikos
www.LEAP.com