
[ NNSquad ] Re: define: "service providers have to manage their networks somehow, especially during peak times."


Nick, et al,

Verizon's SEC filings indicate an installation cost of about $1000 per subscriber. My real-world guess, based on my own install, is closer to $1500/sub. The telco recoup period is 48 months; it always has been. They'll recoup that in my neighborhood easily, and I'm on the expensive side (underground digs rather than aerial installs).

That's credible, as I was part of a startup in the 1980s (we were Project Stargazer in the 1990s) where we got costs down to $1700 per sub for FTTH installation with similar service levels (literally identical design and data rates to the early BPON FIOS installations). That was in 1988, folks; now call up your RBOC and ask WTF has taken them so long :-).

That was in 1988 dollars; apply both CPI increases and Moore's-law decreases, and you'll see that the numbers above are likely not far off in today's dollars either.
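For a rough sense of the math, here's a back-of-the-envelope sketch. The CPI factor (~1.8x from 1988 to the late 2000s) and the labor/electronics split are my illustrative assumptions, not anyone's filed numbers:

COST_1988 = 1700.0         # $/sub from the 1988 build
CPI_FACTOR = 1.8           # rough CPI growth, 1988 -> late 2000s (assumption)
ELECTRONICS_SHARE = 0.4    # assumed share of cost that tracks Moore's law
ELECTRONICS_DECLINE = 0.1  # assume electronics now cost ~10% of 1988 levels

labor = COST_1988 * (1 - ELECTRONICS_SHARE) * CPI_FACTOR
gear = COST_1988 * ELECTRONICS_SHARE * CPI_FACTOR * ELECTRONICS_DECLINE

print(f"labor/civil works: ${labor:,.0f}  electronics: ${gear:,.0f}")
print(f"rough per-sub cost today: ${labor + gear:,.0f}")
# labor/civil works: $1,836  electronics: $122
# rough per-sub cost today: $1,958

The point isn't the exact total, just that the 1988 figure inflates into the same ballpark as the per-sub install costs quoted above.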

Using multiple TCP connections isn't inherently bad behavior. Your web browser, email program, and the like use multiple connections all the time, and it works just fine. Besides, with a default-configured stack there's no way to saturate a network with a single TCP stream. (We've even tested Windows Vista's "adaptive" stack; in its default configuration it performs no better than Win2k.)
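The reason a single default stream can't saturate a fast link is the receive-window bound: throughput can't exceed window / RTT. A quick illustration (the 64 KB ceiling is the classic no-window-scaling maximum; actual OS defaults vary, and these RTTs are typical values, not measurements from our tests):

WINDOW_BYTES = 65535  # classic max receive window without window scaling

for rtt_ms in (10, 50, 100):
    mbps = WINDOW_BYTES * 8 / (rtt_ms / 1000) / 1e6
    print(f"RTT {rtt_ms:3d} ms -> at most {mbps:5.1f} Mbps per stream")

# RTT  10 ms -> at most  52.4 Mbps per stream
# RTT  50 ms -> at most  10.5 Mbps per stream
# RTT 100 ms -> at most   5.2 Mbps per stream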

My own home network (one NAS, four desktops, NO P2P) shows NAT state counts (sorry, I don't break them down between UDP and TCP) over the past two months of: 10.00 min, 62.70 average, 853.57 max.

http://www.nyquistcapital.com/2007/05/31/state-of-the-photon-global-ftth-activity/
and several related (financial) analyses also include traffic data showing upward trends that, even with FTTH, come nowhere near saturating 100Mbps links. If anything, the measured averages are under 1 GByte per day in downloads. (Even 500 days' worth of that, 500 GBytes, crammed into a single month is less than 2Mbps continuous.)
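The conversion behind that parenthetical, for anyone who wants to check it:

def gbytes_to_mbps(gbytes, days):
    """Average continuous rate in Mbps if `gbytes` move evenly over `days`."""
    return gbytes * 8e9 / (days * 86400) / 1e6

print(f"1 GByte/day      ~= {gbytes_to_mbps(1, 1):.3f} Mbps continuous")
print(f"500 GBytes/month ~= {gbytes_to_mbps(500, 30):.2f} Mbps continuous")
# 1 GByte/day      ~= 0.093 Mbps continuous
# 500 GBytes/month ~= 1.54 Mbps continuous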


Sorry, but NTT's own data does NOT show the network being overrun. The Japanese Ministry of Internal Affairs and Communications (MIC) released data illustrating traffic growth (see link below): in August 2007 the MIC published a white paper estimating the total average bandwidth used by DSL and FTTH subscribers in Japan, based on monitoring traffic to and from 6 large ISP nodes representing approximately 40% of all broadband traffic.

http://www.nyquistcapital.com/2007/09/10/the-bandwidth-explosion-myth/

Month/Year    Upload (Gbps)    Download (Gbps)
Dec-04                  276                323
Jun-05                  320                425
Dec-05                  349                468
Jun-06                  412                524
Dec-06                  463                637
Jun-07                  517                722

FTA: "These are raw traffic numbers that represent the average Gb/s traffic rate over a given month. Peak rates in the evening are around 2.3x trough rates in the early morning. In short, total Internet traffic increased just over 2x in the last 2.5 years, with a CAGR of 38%. Not as high as you would expect, but not terrible either. However, these numbers need to be adjusted for broadband subscriber growth in Japan [charted in the article]."
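As a sanity check, the growth rates can be recomputed from the table above (illustrative arithmetic; the endpoints are 2.5 years apart, and the article's 38% figure appears to correspond to the download column):

def cagr(start, end, years=2.5):
    """Compound annual growth rate between two samples `years` apart."""
    return (end / start) ** (1 / years) - 1

print(f"upload CAGR:   {cagr(276, 517):.1%}")             # 28.5%
print(f"download CAGR: {cagr(323, 722):.1%}")             # 38.0% -- the article's figure
print(f"total CAGR:    {cagr(276 + 323, 517 + 722):.1%}")  # 33.7%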

The average upload/download per sub was measured at well under 500 MBytes/day in either direction. That's well under 1Mbps on average, with per-sub peaks in the 5Mbps range.

I can also confirm that on Verizon's FTTH there is NOWHERE near even 10% utilization of the 20/5 Mbps default offering.

Regards,
andy


Nick Weaver wrote:
During times of congestion, "best effort" fails to provide a notion of
fairness that USERS would find equitable.

Firstly, TCP congestion control is NOT about "keeping an ailing network working"; it's about providing a balance between flows when the desired use by all parties exceeds the available bandwidth of the network. Thus the obsession with being "TCP friendly": you want to share a constrained resource as fairly as possible.

The core is overprovisioned, because bandwidth in the core is cheap (it's just lighting more fiber). But bandwidth to the PoP, and bandwidth between the PoP and the end user, is not, and short of replacing the last-mile infrastructure (e.g. FIOS, at what Verizon claims is $10k/customer; even at $2k/customer it would be a lot of cash), it will not be overprovisioned. (And even replacing the network may not be sufficient: "usage grows to absorb available bandwidth" seems to be a rule of thumb.)


So, yes, in an ideal world you would overprovision your network. But that is notgonnahappen.com when users pay flat-rate and judge "speed" by what the provider claims (i.e. the rate at 4am, or to the whitelisted bandwidth-test site) rather than by real-world throughput, so they can't even tell the difference.


And there is nothing fundamentally wrong with underprovisioned networks, as long as the resources are allocated fairly and the network is still usable. Remember, the Internet was usable, for the most part, over 28.8kbps modems. The iPhone, on the pathetically slow EDGE network (~50 kbps), is still usable (not great, mind you, I'm waiting for the 3G myself, but usable nonetheless).

So giving, say, 1000 users a link which runs at 8 Mbps at idle times but 1 Mbps at peak is still sensible (and 8x cheaper in bandwidth costs). So IF you can fairly allocate an underprovisioned network, MOST of the users are still happy, it costs less money, and all is right and good.


The problem, however, is that TCP's notion of "fair" does not match what people would call fair. This forces the network owner to do something, because end hosts are effectively adversarial:

User A has 5 torrents downloading, each with 4 connections open. This is 20 TCP flows, all at high volume. Or A is using another method (e.g. Joost) which violates the conventions of congestion control completely.

User B is watching a YouTube video of a baby dancing.  This is one TCP
flow, but still fairly high volume. [1]

User C is doing some websurfing, generally a few small TCP flows.

User D is trying to call a friend using Skype.

All are experiencing congestion (and in reality, for every A you have 10 Bs/Cs/Ds).

What will happen is that unless A's congestion is due to behavior on
the other side, for every byte B or C is able to receive, A receives
20.

At the same time, D will be experiencing a boatload of jitter (and
perhaps drops), and his call will be effectively unusable.

Hardly seems fair, does it?  Yet from the TCP best-effort model, this
is perfectly fair.
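
A toy calculation makes the point. This is idealized per-flow fair sharing, not a model of real TCP dynamics (which add RTT bias, loss synchronization, and so on), and the 10 Mbps bottleneck is an assumed number:

LINK_MBPS = 10.0  # assumed bottleneck capacity, purely illustrative

users = {
    "A (5 torrents x 4 conns)": 20,  # flow counts from the scenario above
    "B (YouTube video)": 1,
    "C (web surfing)": 3,
    "D (Skype call)": 1,
}

total_flows = sum(users.values())  # 25
for name, flows in users.items():
    share = LINK_MBPS * flows / total_flows
    print(f"{name:26s} {flows:2d} flows -> {share:4.1f} Mbps")

# A (5 torrents x 4 conns)   20 flows ->  8.0 Mbps
# B (YouTube video)           1 flows ->  0.4 Mbps
# C (web surfing)             3 flows ->  1.2 Mbps
# D (Skype call)              1 flows ->  0.4 Mbps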


Traffic management here would be traffic shaping designed to maintain some notion of user fairness rather than flow fairness, so that for every packet A gets, B and C each get a packet. And during peak congestion, when even that isn't sufficient, observe that over the past two hours A got 100x the data B did, so it's OK to reduce A still further, to say 1/2 or 1/4 of what B and C are getting. [2]
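
Here's a minimal sketch of that policy, assuming equal per-user weights with a usage-history penalty. The 10x/100x thresholds and the 1/2 and 1/4 penalties echo the text above; none of this is a production shaper (real ones use weighted fair queuing and the like):

from statistics import median

LINK_MBPS = 10.0  # assumed bottleneck capacity

def user_fair_shares(recent_gbytes):
    """Equal per-user weights, scaled down for grossly heavy recent usage."""
    typical = median(recent_gbytes.values())
    weights = {}
    for user, used in recent_gbytes.items():
        if used > 100 * typical:
            weights[user] = 0.25  # ~100x a typical user: cut to 1/4 share
        elif used > 10 * typical:
            weights[user] = 0.5
        else:
            weights[user] = 1.0
    total = sum(weights.values())
    return {u: LINK_MBPS * w / total for u, w in weights.items()}

# Over the past two hours, A moved ~100x the data B did.
usage_gbytes = {"A": 200.0, "B": 2.0, "C": 1.0, "D": 0.5}
for user, mbps in user_fair_shares(usage_gbytes).items():
    print(f"{user}: {mbps:.2f} Mbps")
# A: 0.77 Mbps; B, C, D: 3.08 Mbps each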

And unless and until the Bs and Cs become so numerous, and/or the network so underprovisioned, that even a fair per-user share is unusable, cutting back A to the level of B and C is a huge win for the Bs and Cs, who grossly outnumber the As yet are paying the same. So A gets hurt, and the Bs and Cs win. Since everyone is paying the same, why shouldn't this be good for B and C?


Likewise, detecting that D is running a low-latency realtime application and prioritizing D's traffic improves D's user experience without affecting A, B, and C much (the Ds are low bandwidth). Yes, end-to-end QoS would do the job better, but again, that's notgonnahappen.com. Thus D's experience benefits greatly if the ISP can automatically classify the protocol accurately and prioritize it.
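
A crude sketch of the idea: classify small, steady-rate UDP flows as voice-like and dequeue them ahead of bulk traffic. The heuristic and thresholds here are stand-ins for real protocol classification, which is far more involved:

from collections import deque
from dataclasses import dataclass

@dataclass
class Packet:
    proto: str        # "udp" or "tcp"
    size_bytes: int
    flow_kbps: float  # running rate estimate for the packet's flow

def looks_realtime(pkt):
    # Voice-like: UDP, small packets, modest steady bitrate (e.g. a VoIP codec).
    return pkt.proto == "udp" and pkt.size_bytes < 300 and pkt.flow_kbps < 100

realtime_q = deque()
bulk_q = deque()

def enqueue(pkt):
    (realtime_q if looks_realtime(pkt) else bulk_q).append(pkt)

def dequeue():
    # Strict priority: realtime packets always go first.
    if realtime_q:
        return realtime_q.popleft()
    return bulk_q.popleft() if bulk_q else None

enqueue(Packet("tcp", 1500, 900.0))  # A's torrent segment arrives first...
enqueue(Packet("udp", 160, 64.0))    # ...then D's Skype frame
print(dequeue().proto)  # -> "udp": D's frame jumps ahead of A's segment

Strict priority is only safe here because voice-like flows are low bandwidth; misclassify a bulk flow into that queue and it starves everyone else.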


Additionally, overprovisioning is not a fair solution for B, C, and D in a flat-rate pricing world. If you overprovision to the point where congestion stops, A benefits hugely (the As, generally, will take all the bandwidth they can get), B a bit (for many users YouTube is becoming limited by its realtime behavior rather than by congestion), and C and D hardly at all. Yet A, B, C, and D all have to pay the costs equally.


So if you want an overprovisioned network, pay for it! Vote with your pocketbook! There are plenty of business-grade providers who will happily sell you a network with a Service Level Agreement (as a bonus, you are likely to get a provider who knows what a "trouble ticket" is), and even some consumer networks (e.g. Lariat) that will guarantee you an allocation.

Likewise, if you have demand-based billing, an overprovisioned network makes sense: those who use it pay for it. If I'm an ISP with demand billing, I'm going to be properly (or over-) provisioned, because I want to extract every dime from the As.

But if you want cheap, (reasonably) fast, flat-rate, and usable, you need traffic shaping and management to keep the As from outcompeting everyone else.

Because otherwise A will be happy (his downloads are still fast, even at 7pm), but B, C, and D will all be miserable, and they'll shift to a provider that does favor them.




[1] The reason B is not using a boatload of flows is twofold. First, there is a convention that a web browser should have only 4 TCP flows transmitting at a time. But more importantly, the webservers themselves have an incentive to enforce this, because they don't want one user taking up too much of their bandwidth to the detriment of other users.

[2] Additionally, heavily shaping the asymmetric leeches/seeds generally doesn't hurt A at all (it hurts the OTHER BitTorrent users, but not A), but it can be a huge benefit for B, C, and D if the uplink is experiencing congestion.