ICANN Email List Archives

[gnso-ff-pdp-may08]



RE: [gnso-ff-pdp-may08] Back to work...

  • To: joe@xxxxxxxxxxxxxxxxxx
  • Subject: RE: [gnso-ff-pdp-may08] Back to work...
  • From: "Mike O'Connor" <mike@xxxxxxxxxx>
  • Date: Sun, 20 Jul 2008 15:08:31 -0500


At 10:54 AM 7/19/2008, Joe St Sauver wrote:
#Meanwhile, a picture came into my head this morning over coffee and I
#thought I'd share it with you.  Here's a link;
#
# https://st.icann.org/pdp-wg-ff/index.cgi?mikes_infoenginev1

You are quite the diagrammatic artist, sir!

just to put the final nail in the "Mike is a business puke, not a geek" coffin, let me point out that i did that in PowerPoint, not Visio...


I'm also impressed that you instinctively sensed that we researchers are,
as a breed, slow and thick, sort of the Newfoundland dogs of the network
world. :-) Ah, but to be an ectomorphic Saluki hound some day, instead. :-)

ah. should have been more clear. "Researcher" is a program/system, not a person. sorry about that.

"thick" is meant to be shorthand for "chock full o'logic" (ala thick-client vs thin-client).

"slow" is shorthand for "using TCP rather than UDP"


#Observations;
#
#- This is purely optional for all parties.  ISPs *can* add a
#"Verifier" box to their rack if they want.  Registrars/Registries
#wouldn't have to change anything.
#
#- The cycle of delivering a DNS response to an end user would remain
#speedy/scalable
#
#- The ISP's existing DNS server would be untouched (it just wouldn't
#get quite as many requests)
#
#- The "Researcher" could go anywhere and could be as thick and slow
#as needed to determine the "fluxiness" of an address

My counter-observations:

-- I wouldn't assume that at least some DNS-based mitigation at the
   ISP level isn't already happening, nor, if it is, that it would
   fully or even partially eliminate the need for action by other
   parts of the community

just a start... maybe it's an add-on to what's already happening? i was mostly trying to sidestep some of the technical issues that arose during email conversations:

- Steve's realization about subdomains
- Marc's need for speed
- Steve's idea that maybe we can someday drive a change into "core" DNS (thinking of this almost as a testbed)
- Etc.


-- While this sort of thing may be optional from ISP to ISP, I suspect
   that if you're at an ISP that did implement this filtering, they
   would NOT be very likely to make it easy for you as a customer to
   opt in, or to opt out.

if i were running a recursive DNS (ISP or corporate), i'd probably just list Verifier on the support website as the main/default DNS server address, but offer access to the "unfiltered" DNS server for those customers who want to bypass it (maybe in real small type, with suitable dire warnings, etc.).

if i were writing the "Verifier" code, i might put knobs and dials in there for customers to twiddle. again, default settings with overrides available for people who want to change the criteria. a lot like spam-filtering options. regular or extra-crispy, you (customer) get to choose.
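
to make the knobs-and-dials idea concrete, here's a rough sketch (Python) of what per-customer Verifier settings might look like. every name and default below is invented for illustration -- not any real product's config:

  # hypothetical per-customer Verifier settings; all names and defaults
  # are made up for illustration
  from dataclasses import dataclass

  @dataclass
  class VerifierSettings:
      filtering_enabled: bool = True   # the opt-out knob: False = raw, unfiltered DNS
      block_threshold: float = 0.8     # block responses whose fluxiness score exceeds this
      use_shared_list: bool = True     # subscribe to the community white/black list

  DEFAULTS = VerifierSettings()                         # "regular"
  EXTRA_CRISPY = VerifierSettings(block_threshold=0.5)  # blocks more aggressively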



-- Manual research efforts probably don't need to drive the filtering;
   fastfluxing hosts can easily be identified on an automated basis, and
   that automation will be needed if the fastflux phenomenon becomes
   widespread.

yup. see above -- i'm not thinking of manual research here. rather, the "Researcher" program would be built to determine the fluxiness of an address (probably in several dimensions) and pass those conclusions on up to the blacklist as attributes. those are the attributes that the Verifier could use to determine which addresses to block (based on local settings). hopefully that "fluxiness" description/data-model could change quickly as the community gets better at it and/or bad actors change their tactics.
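
here's a back-of-the-napkin sketch (Python) of how Researcher might score fluxiness along several dimensions and publish the result as attributes. every dimension and cutoff here is a guess for illustration, not a real detection model:

  # hypothetical fluxiness scoring -- the real dimensions and cutoffs
  # would evolve as the community learns and bad actors change tactics
  def fluxiness(ttl_seconds, ips_seen, distinct_asns, ns_changes_per_day):
      dims = {
          "low_ttl":       1.0 if ttl_seconds < 300 else 0.0,   # fast flux favors short TTLs
          "many_ips":      min(ips_seen / 20.0, 1.0),           # lots of A records over time
          "asn_diversity": min(distinct_asns / 5.0, 1.0),       # IPs scattered across networks
          "ns_churn":      min(ns_changes_per_day / 3.0, 1.0),  # nameservers changing rapidly
      }
      return dims, sum(dims.values()) / len(dims)

  # publish the per-dimension attributes, not just a verdict, so each
  # Verifier can apply its own local threshold
  dims, score = fluxiness(ttl_seconds=120, ips_seen=35, distinct_asns=9, ns_changes_per_day=4)
  entry = {"domain": "example.com", "attributes": dims, "score": score}

the point isn't these particular dimensions -- it's that fluxiness travels as attributes, so the data model can change without touching every Verifier.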

Researcher (Researchomatic?) could be pretty complex, maybe even owned/run by a company that is a Registrar so that it could get at various automated interfaces. it wouldn't care so much that WHOIS is slow/TCP because Researcher wouldn't be in the real-time response role -- that's Verifier's job.

Researcher (and Verifier too, i guess) might benefit from some Crockerian TTL/token/wait logic. Verifier wouldn't have to change its mind about passing/blocking until TTL had expired, and it similarly wouldn't have to echo DNS requests up to Researcher either. same deal with Researcher. it wouldn't have to constantly re-check domains until TTL had expired. actually, both of them could get quite clever about which addresses to echo/check.
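
a minimal sketch of that TTL logic (Python, hypothetical names), just to show the shape. the same structure works on the Researcher side for deciding when a domain is due for a re-check:

  import time

  # Verifier-side verdict cache: honor a verdict until its TTL expires,
  # and don't echo anything up to Researcher before then
  class VerdictCache:
      def __init__(self):
          self._entries = {}  # domain -> (verdict, expiry timestamp)

      def get(self, domain):
          entry = self._entries.get(domain)
          if entry and time.monotonic() < entry[1]:
              return entry[0]  # still fresh: don't change our mind yet
          return None          # expired or never seen: time to re-check

      def put(self, domain, verdict, ttl_seconds):
          self._entries[domain] = (verdict, time.monotonic() + ttl_seconds)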


-- Expect false positives, until the good guys figure out what's needed
   on a technical basis, and white lists get incrementally built to
   prevent accidents or malicious mis-listings of innocent IPs.

i think TTL, woven into the fabric of the two programs, could address this. and spam-filtering folks have gotten pretty good at fixing false positives quickly. presumably these are the people who'd be writing these systems. i'm clueless about this, except being on the receiving end of false positives every once in a while. it used to be a hassle to get off the blacklist; it's a lot easier now.


-- Inserting middleboxes in the network data stream (whether these are
   P2P-focused traffic shapers, hardware anti-spam appliances,
   antivirus scanners, firewalls/NAT boxes, or your hypothetical DNS
   appliance) results in progressive loss of network transparency,
   increased costs for network buildouts, and non-coherence from site
   to site as different sites end up rewriting, or blocking (or not
   blocking), different entities.

there are a bunch of choices to be made. maybe Verifier isn't a separate box, maybe it is. maybe some implementations of DNS have Verifier added on. maybe someday all implementations of DNS have something like this added on. maybe there's a community-wide shared white/black list, maybe it's supplied by many different vendors (a la spam-blocking).

one of the reasons for Verifier is to get around the rock that Steve Crocker stubbed his toe on -- subdomains. another reason for Verifier is to separate the DNS-logging function so that a DNS-operator could participate without having to forklift in new DNS. but as a former ISP operator, i'm ok with putting stuff in the rack (and supporting it) if it does a Good Thing for the customers. or me...


   And can you imagine trying to get your DNS *UN*-blocked at a million
   different sites if there was a false positive? Heck, how would you
   even know to ask to be unblocked?

again, spam-blockers have gotten a lot better at this -- i would hope for similar goodness from the folks that took this on. one could use this to lobby in favor of a shared list -- and a clear path to being removed from the list. once off, all the little Verifiers in the world would get the message pretty quickly (no later than TTL).


-- Even if nothing else bothers you about this, don't be surprised when
   front line customer support staff can't figure out what the heck a
   customer is running into when something doesn't work.

seems like there are a few ways for customer-support people to be trained. use the Verifier-enabled DNS address to look up an address, then use the real DNS box. if DNS sees it, and Verifier-DNS doesn't, you've got your answer. or go look up the address on the white/black list.
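
that check is easy to script, too. here's a sketch using the dnspython library (the resolver addresses are placeholders -- substitute your own):

  import dns.exception
  import dns.resolver

  VERIFIER_DNS   = "192.0.2.53"  # hypothetical Verifier-enabled resolver
  UNFILTERED_DNS = "192.0.2.54"  # hypothetical raw resolver behind it

  def lookup(nameserver, name):
      resolver = dns.resolver.Resolver(configure=False)
      resolver.nameservers = [nameserver]
      try:
          return [rr.to_text() for rr in resolver.resolve(name, "A")]
      except dns.exception.DNSException:
          return None  # NXDOMAIN, timeout, etc. -- treat as "not seen"

  def diagnose(name):
      if lookup(UNFILTERED_DNS, name) and not lookup(VERIFIER_DNS, name):
          return "real DNS sees it, Verifier-DNS doesn't: it's being filtered"
      return "both agree -- the problem is somewhere else"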

customer support's been dealing with spam-blockers for a long time -- i would guess that they could pretty quickly be brought up to speed on this one too, especially if Verifier had a few tools built in to help them.

"go ask Verifier!"


   And note: one call to customer service is usually assumed to
   completely destroy all profitability associated with a customer for
   a period of what may be multiple years.

um...  that's a two-beer debate.


-- Even if 50% (or 90% or 95% or insert your favorite value here)
   of ISPs elect to clean their DNS data stream, just as some large
   fraction of ISPs filter their incoming email stream, this does
   not mean that fastflux abusers will be deterred, any more than
   spammers are deterred by spam filtering. For example, in the case
   of spam, the spammers simply hammer the unprotected ISPs all the
   more heavily (bummer if you can't afford spam filtering or the
   extra capacity required, folks), compromise additional hosts more
   aggressively in an effort to replace no longer useful hosts that
   have been block listed, etc.

i don't really have a response to this. what i'm describing is an information-based approach -- the more info the better, the more participants the better. i think even if only 5% of ISPs and corporate DNS-operators participated at the outset, we'd have a lot more info about what's going on out there. if this can save operators money, make them more nimble, improve the quality of their service or earn them more revenue (insert clever list here), they might be motivated to adopt. if it doesn't do any of those things, then it deserves to die.


-- We can add this incredible additional complexity with boxes and
   arrows going all over the place, at every ISP everywhere in the
   world, or the community can deal with its criminal customers whose
   business model relies on hosting their content on clandestinely
   bot'd consumer PCs at the one choke point that exists, namely
   the registrars.

ah... that one goes into the "information vs policy-based solutions" or the "fix problem at core or edge" debate, no?

thanks for the comments!

m




