
[ NNSquad ] Re: Electrical Analogy for Peak-Demand Pricing


Rahul makes a number of points that challenge the meaning of peak or
congestion pricing. To expand on the theme:

Once again we're prisoners of our metaphors and our tendency to accept
Malthusian scarcity. And, yes, let's not confuse bits and electrons. Both are
fungible, but in different ways. With bits I may want my own information, but
I don't care whether they travel wired or wireless, over copper or fiber,
red/green or yellow/black.

I'm reminded of a meeting at which the topic was how to share the scarce
capacity of the access points because people couldn't get addresses
assigned. I simply walked down the hall and got another access point to add
capacity. Or another meeting at which there was a kiosk with terminals but
no Wi-Fi. I asked the provider's engineer and he immediately added an access
point, even though C&W had officially said that they had used the last
DSL line for the kiosk, as if that exhausted the capacity. The C&W talk was a
lament about how people weren't willing to pay a scarcity-based price for
their abundant bits.

As with the modem crisis of the '90s, the claims are real if you accept the
problem as presented. Why do we accept the providers' business model and
architecture as unchangeable givens? A phone company's circuit-based
thinking creates scarcity by partitioning capacity into rivulets.

This is why I keep coming back to local ownership of our own facilities. I
remember Nynex (or whatever it was called) trying to sell me a 2400 bps
office LAN in the early 1980s because that was their world. Fortunately I'd
already run Ethernet cables around the office by then.

What is the capacity latent in today's physical infrastructure? What if we
used new technologies and treated it as a common medium?

I want to be very careful to explain why I use DSL as an example. It's not
because copper is best but because it is a simple and real example. DSL is
stuck in the 1980s because the carriers have little incentive to improve
it. What if it had tracked other improvements, like Ethernet going from 10
to 1000 Mbps for campus networks during that period? What if we treated wire
bundles as a whole instead of limiting me to a single pair capped at 12,000
feet? For coax/fiber, what if we used IP instead of reserving the bulk of
the capacity for faux-broadcast?
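
To make the "treat the bundle as a whole" point concrete, here is a tiny
back-of-envelope sketch in Python. The binder size and per-pair rates below
are illustrative assumptions, not measurements; the only point is that
bonding pairs and shortening loops multiplies usable capacity.

    # Back-of-envelope: a copper binder treated as a whole vs. one pair.
    # All figures below are illustrative assumptions, not data.

    PAIRS_PER_BINDER = 25        # typical binder-group size (assumption)
    LONG_LOOP_PAIR_MBPS = 3      # ADSL-class rate on a ~12,000 ft pair (assumption)
    SHORT_LOOP_PAIR_MBPS = 25    # VDSL-class rate on a short, fiber-fed pair (assumption)

    one_long_pair = LONG_LOOP_PAIR_MBPS
    bonded_long = PAIRS_PER_BINDER * LONG_LOOP_PAIR_MBPS
    bonded_short = PAIRS_PER_BINDER * SHORT_LOOP_PAIR_MBPS

    print(f"One long pair:             {one_long_pair:6d} Mbps")
    print(f"Whole binder, long loops:  {bonded_long:6d} Mbps")
    print(f"Whole binder, short loops: {bonded_short:6d} Mbps")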

Where are these constrictions? It's as if I were told that I couldn't leave
my driveway to go to the corner store because there was a traffic jam
downtown.

Our time is better spent challenging the plaints about scarcity. After all,
where else in computing and connectivity have we failed to track Moore's law
or exceed it?

   [ Actually, DSL *is* undergoing continual improvements (as has been
     noted here in NNSquad many times) -- with throughputs rising far
     beyond what might have been originally predicted -- using new
     modulation techniques and the like (at least over relatively
     short copper pairs).  But the bottom line is that this is really
     just a holding action by carriers (like AT&T) who have an
     enormous investment in copper that in most cases now may be quite
     a few decades old.  Nobody wants to invest in significant new
     copper plant, and the costs of copper maintenance (particularly
     for old buried cables) are very high.  The physics of DSL requires
     relatively closely spaced (fiber-fed) remote terminals to achieve
     higher speeds -- this is very expensive.  Meanwhile, copper
     keeps rotting.  See: "Verizon Copper Going To Seed" 
     ( http://bit.ly/cKbyVO [InfoTel] ).

         -- Lauren Weinstein
            NNSquad Moderator ]

    
  

 

-----Original Message-----
From: nnsquad-bounces+nnsquad=bobf.frankston.com@nnsquad.org
[mailto:nnsquad-bounces+nnsquad=bobf.frankston.com@nnsquad.org] On Behalf Of
Rahul Tongia
Sent: Sunday, May 09, 2010 09:58
To: Rollie Cole
Cc: nnsquad@nnsquad.org
Subject: [ NNSquad ] Re: Electrical Analogy for Peak-Demand Pricing

 

Rollie,

 

I do a lot of work on power systems (and smart grids).

 

A few points worth mentioning (in no particular order):

 

1) The marginal cost of electricity is quite non-trivial, unlike bits.

2) Electricity cannot easily be stored at scale.  Bits can be delayed
and/or retransmitted, within reason (based on the app).

3) The fungibility of electrons (electricity) is infinitely higher
than that of bits - I only want to receive MY bits, which implies it is
both a first-mile and a last-mile issue.

4) With electricity, reducing anyone's consumption, anywhere, helps
the overall system.  It is not as often the case that the "last mile"
(e.g., the distribution transformer) is the bottleneck.

5) The actual uptake of off-peak or variable tariffs is quite low,
usually limited to larger consumers or specialized programs by
selected utilities.  There are actually two types of pricing we might
think of.  First, what you write about: "interruptible" or
"degradable" service (usage caps).  The second is actually varying the
tariff to incentivize appropriate behavior, either through Time of Use
or (proposed) real-time pricing (a small numeric sketch follows after
this list).  We then come to more nuances, in terms of the periodicity
of tariff updates and the separation of economic pricing from
"critical" control pricing - the expectation is that the former should
suffice 99.x% of the time.

6) With electricity, the main bottleneck to doing more is lack of
information, and hence the push towards smart meters as a step towards
a Smart Grid.  With the net, measurements are actually a little
easier, BUT, I claim, we are not measuring based on true *marginal*
scarcity (or at least that is hard for outsiders to tell).  Marginal
means both location and time.
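
A minimal numeric sketch of the flat vs. Time of Use idea in point 5 (an
illustration added here; the rates and the daily load profile are assumed
values, not real utility numbers, chosen only to show how shifting usage
off-peak changes the bill):

    # Flat tariff vs. a simple Time-of-Use (TOU) tariff.
    # Rates and the load profile are assumptions, not real utility numbers.

    flat_rate = 0.12                                           # $/kWh (assumed)
    tou_rates = {"off_peak": 0.07, "mid": 0.12, "peak": 0.25}  # $/kWh (assumed)

    usage = {"off_peak": 10.0, "mid": 6.0, "peak": 4.0}        # kWh per day (assumed)

    flat_bill = flat_rate * sum(usage.values())
    tou_bill = sum(tou_rates[p] * kwh for p, kwh in usage.items())
    print(f"Flat tariff:        ${flat_bill:.2f}")
    print(f"Time-of-Use tariff: ${tou_bill:.2f}")

    # Shift 3 kWh from peak to off-peak -- the behavior TOU tries to reward.
    shifted = dict(usage, peak=usage["peak"] - 3.0, off_peak=usage["off_peak"] + 3.0)
    shifted_bill = sum(tou_rates[p] * kwh for p, kwh in shifted.items())
    print(f"After shifting 3 kWh off-peak: ${shifted_bill:.2f}")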

 

So, to summarize how I see this - very small degradations or tweaks at
a very local level (wherever any bottleneck may be) for a short
duration should fix the majority of problems.  Of course, if the ISP
builds for 1-2 Mbps usage and people expect to download HD video
regularly, then it might choke.  The harder questions with this are
(1) what are the overhead and transaction costs; and (2) to what
extent should (or should not) application awareness and integration
play a role?
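
As a rough illustration of the "builds for 1-2 Mbps, users want HD video"
choke point, here is a minimal back-of-envelope sketch; the per-user
provisioning, HD stream rate, and concurrency figures are assumptions for
illustration only:

    # Back-of-envelope: when does a per-user-provisioned access link choke?
    # All figures are illustrative assumptions, not measurements.

    provisioned_per_user_mbps = 1.5   # ISP plans ~1-2 Mbps per subscriber (assumption)
    hd_stream_mbps = 5.0              # one HD video stream (assumption)
    subscribers = 1000
    peak_concurrent_viewers = 0.40    # 40% streaming at the same time (assumption)

    capacity = subscribers * provisioned_per_user_mbps
    demand = subscribers * peak_concurrent_viewers * hd_stream_mbps

    print(f"Provisioned capacity: {capacity:8.0f} Mbps")
    print(f"Peak HD demand:       {demand:8.0f} Mbps")
    print(("Chokes" if demand > capacity else "Fits")
          + f" (demand is {demand / capacity:.1f}x capacity)")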

 

I am writing this sitting in India, where "peak" electricity is not
met through (expensive) peaking units - it is met via load-shedding.

 

Rahul

 

On Sat, May 8, 2010 at 9:56 PM, Rollie Cole <rolliecole@gmail.com> wrote:

> In addition to all the proposed new schemes for "time-of-day" pricing and
> the like, we do have actual experience with "peak-shaving" pricing in the
> electricity world. Large industrial firms that explicitly promise to allow
> themselves to be "browned out" (service cut back) in peak demand periods
> get a lower rate. I get a lower rate because I allowed my electrical
> provider to install and operate a device that will cut back my air
> conditioning during peak demand periods.
>
> So one could imagine a rough analogy to static and dynamic IP addresses.
> Perhaps, as with IP addresses, the default is the "brown-out" rate
> (throttled when and if the network is congested). The difference I would
> urge is that the details be spelled out in advance as far as is feasible,
> so end-users could begin to adopt use patterns that would help reduce
> instances of congestion -- i.e., do heavy downloading or uploading at
> unusual times, etc.
>
> Those who wanted the standard "consumer electrical" deal (i.e., "best
> effort at all times, regardless of peak") could get it, just as those who
> need a static IP can do so.
> --
> Rollie Cole
> 5315 Washington Blvd
> Indianapolis, IN 46220-3062
> 317-727-8940
>