[ NNSquad ] Re: Richard Bennett's Take on uTorrent / UDP / VoIP
- To: nnsquad@nnsquad.org
- Subject: [ NNSquad ] Re: Richard Bennett's Take on uTorrent / UDP / VoIP
- From: Rahul Tongia <tongia@cmu.edu>
- Date: Mon, 01 Dec 2008 23:05:49 -0500
- Cc: Lauren Weinstein <lauren@vortex.com>, Rahul Tongia <tongia@andrew.cmu.edu>
There were a few things his interesting article made me think about.
[Synopsis: where does congestion occur, how do you signal it, and how
are the two linked?]
Is the problem one where local actions can have implications beyond
local control, e.g., beyond the cable company's AS/sub-Internet (capital
I)? For starters, do they have the right to manage their local
system as they see fit (the bulk of the NN question)? More importantly,
are there predictable and unpredictable implications for the rest of the
network outside their system? Related to this is the issue of where
congestion occurs. If it occurs mostly at the "last mile", do we need
protocols that only affect such geographies? And are these determined by
points of congestion or points of control?
Is it fair to estimate that congestion in the core is the carrier's
"fault" while congestion at the edge is the end users'? If so, is the
application the right place to make changes? Good torrent software would
be congestion-aware, but that requires the carrier to give honest and
efficient (at the margin) signaling about congestion. If we then charged
users accordingly, that would be fair. By that measure, usage caps
may be largely inefficient. A more complex problem arises if we believe
there is congestion in link n+1, i.e., beyond the very edge. How do we
allocate and signal that? Suppose we start with a simplistic design (off
the top of my head!): the core is the carrier's fault/problem, the edge
is the users' or application's. Then how do we vary congestion "fault"
from one to the other? In addition, wouldn't we need to know more about
the topology, given the n+1 problem?
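As one concrete (and purely hypothetical) picture of what honest
congestion signaling could look like, here is a minimal Python sketch
that reads ECN (Explicit Congestion Notification, RFC 3168) marks on a
Linux UDP socket. The port is arbitrary, the IP_RECVTOS fallback is the
Linux constant, and none of this is specific to any carrier's practice.

import socket

ECN_MASK = 0x03   # ECN lives in the low two bits of the IP TOS byte
ECN_CE   = 0x03   # "Congestion Experienced": a router saw its queue filling

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9000))          # arbitrary illustrative port

# Mark our own packets ECN-capable (ECT(0) = 0b10) so routers may
# mark them instead of dropping them.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x02)

# Ask the kernel to deliver each received packet's TOS byte as
# ancillary data. IP_RECVTOS is 13 on Linux; older Pythons may not
# export the constant.
IP_RECVTOS = getattr(socket, "IP_RECVTOS", 13)
sock.setsockopt(socket.IPPROTO_IP, IP_RECVTOS, 1)

data, ancdata, flags, addr = sock.recvmsg(2048, socket.CMSG_SPACE(1))
for level, ctype, cdata in ancdata:
    if level == socket.IPPROTO_IP and ctype == socket.IP_TOS:
        if cdata[0] & ECN_MASK == ECN_CE:
            print(f"network signaled congestion on packet from {addr}")

A congestion-aware application could throttle itself on seeing CE marks,
before any packet is lost -- the kind of "efficient at the margin"
signal I have in mind.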
Say we have 10:1 oversubscription per link, scaling into the core in
three steps: 1 Mbps to each of 10 users sharing a 1 Mbps link at the
edge, 10 such edges sharing a 10 Mbps link, and 10 such boundary nodes
sharing a 100 Mbps link to the core (1,000 users). (In reality, there
would be more muxing at upper layers, but not 10:1.) If congestion is at
the outermost edge, then it's more straightforward to conceptualize the
edge changing behavior via signaling or protocols. At the final backbone
link, the "cheapest QoS" might be to add bandwidth. What about in
between? Short of upgrading all the links, what protocol(s) will help
with this problem, perhaps with signaling to the apps/user?
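To make the arithmetic of that toy topology explicit, here is a quick
Python sketch; the tier layout is just the example above written out,
assuming the worst case where every user transmits at full rate at once.

users, per_user_mbps = 1000, 1.0

tiers = [
    # (name, number of links at this tier, capacity per link in Mbps)
    ("edge",     100, 1.0),    # 10 users per 1 Mbps edge link
    ("boundary",  10, 10.0),   # 10 edge links per 10 Mbps link
    ("core",       1, 100.0),  # 10 boundary links per 100 Mbps core link
]

for name, links, capacity in tiers:
    offered = users * per_user_mbps / links   # worst-case load per link
    print(f"{name:8s}: {offered:6.1f} Mbps offered on a {capacity:5.1f} Mbps "
          f"link -> {offered / capacity:.0f}:1")

Every tier comes out 10:1 against subscribed demand, which is why adding
capacity at any single tier just moves the bottleneck rather than
removing it.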
The power (electricity) world's solution was to separate capacity
charges from usage (kWh) charges: charge users for their share of the
coincident peak load capacity. But the Internet is different; some
packets can be retransmitted, stored (including buffered), or delayed.
The power system was built so that usage stays reasonably close to
maximum capacity but never too close. In the web world, we seem to want
to run as close to capacity as possible (with the expectation that some
apps can live with delays). By that token, torrents are good, since they
use "idle" capacity. The problem arises when the capacity they use is
not idle.
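For concreteness, here is a toy Python sketch of that coincident-peak
idea; the traffic samples are invented and nothing here reflects a real
tariff design.

# user -> Mbps sampled over the same five intervals (made-up numbers)
usage = {
    "alice": [0.2, 0.9, 1.0, 0.3, 0.1],
    "bob":   [1.0, 0.1, 0.8, 0.2, 0.0],
    "carol": [0.1, 0.2, 0.9, 0.9, 0.2],
}

totals = [sum(col) for col in zip(*usage.values())]       # system load
peak = max(range(len(totals)), key=totals.__getitem__)    # peak interval
peak_total = totals[peak]

for user, samples in usage.items():
    share = samples[peak] / peak_total   # share of the *coincident* peak
    print(f"{user}: {share:.0%} of capacity cost "
          f"({samples[peak]:.1f} of {peak_total:.1f} Mbps at the peak)")

Note that bob's individual maximum (1.0 Mbps) doesn't matter; only what
he was sending at the moment the system peaked does. That is the
property a flat usage cap lacks.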
BTW, here we worry about congestion at the cutting edge. Much of the
world grapples with congestion on a daily basis due to bottlenecks such
as mobile spectrum, international gateways, etc. Well, I guess we worry
about spectrum bandwidth a little here in the US too...
Rahul
Brett Glass wrote:
I have only one criticism of Richard's article: he understates his case,
because he neglects to mention one extremely important point. By
switching to UDP, BitTorrent will not only compete with VoIP and some
video and audio applications but also with DNS.
This could well be catastrophic (in fact, it could bring about the
"Internet meltdown" that Lauren postulated some years back). Why?
Because DNS (the Domain Name System), as ISPs and network administrators
know all too well, is a "critical path" protocol in virtually every
application. If DNS is slow, EVERYTHING ELSE that users do will also
be slow. Remember, most network applications, including Web browsers,
have to stop and wait -- unable to do anything else -- until they
resolve one or more domain names. So, they'll hang frustratingly if
DNS packets are dropped due to congestion. And what underlying
transport protocol does DNS use by default? UDP. (It can use TCP as
well; however, it does so if, and only if, it has a lot of data to
transfer. And TCP, due to its complex handshaking and "slow start"
flow control, is much less efficient and much slower.)
So, what we're talking about is not just congestion but sand in the
gears of the entire Internet.
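To see what that stall looks like on the wire, here is a bare-bones
Python sketch of a stub resolver's UDP query with a classic retransmit
schedule. The resolver address and the timeout values are my
illustrative choices, nothing more.

import socket, struct, time

def dns_query_a(name):
    # Minimal RFC 1035 query for an A record: header, QNAME, QTYPE, QCLASS.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)  # RD set
    qname = b"".join(bytes([len(p)]) + p.encode()
                     for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def resolve(name, server="8.8.8.8", timeouts=(2.0, 4.0)):
    query = dns_query_a(name)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        for t in timeouts:                  # retransmit schedule
            s.settimeout(t)
            s.sendto(query, (server, 53))
            try:
                return s.recv(512)          # traditional UDP DNS size limit
            except socket.timeout:
                pass                        # one drop = t seconds of stall
    raise TimeoutError(f"no answer for {name}: every UDP query was lost")

start = time.monotonic()
resolve("example.com")
print(f"resolved in {time.monotonic() - start:.2f}s")

Every socket.timeout in that loop is dead time during which the
application that asked for the name can do nothing at all.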
Also, because uTP does not conform to any explicit congestion
management protocol that could detect congestion BEFORE packets are
dropped, the only way it would be able to detect congestion in the
network would be after packets were dropped. That means that by the
time it did anything -- IF it did anything -- to mitigate the
congestion it caused, it would already have damaged the network.
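Schematically, purely loss-driven control is the following toy loop; the
link model and constants are invented, and this is the pattern I'm
describing, not uTP's actual code.

LINK_CAPACITY = 100.0    # packets per tick the bottleneck can carry

rate = 10.0
for tick in range(20):
    lost = max(0.0, rate - LINK_CAPACITY)   # overshoot becomes loss
    if lost > 0:
        rate /= 2     # multiplicative decrease -- AFTER the damage is done
        print(f"tick {tick:2d}: {lost:.0f} packets dropped, "
              f"rate cut to {rate:.0f}")
    else:
        rate += 10.0  # additive increase while no loss is observed

The dropped packets always precede the reaction; a sender honoring an
explicit congestion signal could have backed off before anything was
lost.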
YMMV, but personally I wouldn't want to be on the same cable segment
with someone using this new version of BitTorrent.
--Brett Glass