I've been suggesting this kind of thing for years
http://www.news.com/8301-11128_3-9835281-54.html?part=dtx&tag=nl.e433
On Tuesday 18 December 2007 20:22:45 David Nicol wrote:
I've been suggesting this kind of thing for years
http://www.news.com/8301-11128_3-9835281-54.html?part=dtx&tag=nl.e433
Clearly you don't subscribe to any industry magazines. Yeah, been there, done that.
It's a _really, really_ good idea, and there have been implementations for maybe five or ten years, but yeah, it's out there.
On Dec 18, 2007 9:43 PM, Jonathan Hutchins hutchins@tarcanfel.org wrote:
On Tuesday 18 December 2007 20:22:45 David Nicol wrote:
I've been suggesting this kind of thing for years
http://www.news.com/8301-11128_3-9835281-54.html?part=dtx&tag=nl.e433
been there, done that. It's a _really, really_ good idea, and there have been implementations for maybe five or ten years, but yeah, it's out there.
I wonder if the likes of Dell have DC options. It seemed bizarre to me that our Dell blade system required 240VAC.
On Dec 18, 2007 8:22 PM, David Nicol davidnicol@gmail.com wrote:
I've been suggesting this kind of thing for years
http://www.news.com/8301-11128_3-9835281-54.html?part=dtx&tag=nl.e433
-- Looking back, I realize that my path to software as a career began at the age of seven, when someone taught me to count in binary on my fingers.
Actually, the standard telecom voltage has been 48 VDC, with two interesting bits of trivia.
The term "Ring" refers to the ring of a plug that is the direct ancestor of the plug we use for musical instruments today.
So "Tip" refers to the TIP of that same plug.
The other odd details of note are that in the original "Bell System," most phone wiring for POTS was either red/green/yellow 3-wire cable or red/green/yellow/black 4-conductor wire.
The oldest connections used red and green for the talk and dial circuit, and yellow was often used for either ringing or party-line user identification. The GREEN wire was positive and called Tip; the RED wire was negative and called Ring; the YELLOW wire was called Sleeve, after the rearmost part of the plug, with BLACK as a second sleeve, as I recall the systems. Curiously, when I fact-checked myself, the yellow wire's use seemed to be regional and not totally consistent, even within the same region.
Oren Beck
816.729.3645
DC is also great at: corrosion, explosions, arc welding, and electrocution.
I wonder if the cost savings take into account the price of all that thick copper needed to distribute DC throughout the datacenter. I've seen 2-inch copper cables firsthand. There's a datacenter downtown that already has DC infrastructure in place, with a big battery, power stepping/switching, and UPS room. It might be more energy efficient, but I wouldn't bother with it unless you could deliver +/-12, +/-5, and +3.3 V to the rack. DC-to-DC conversion is notoriously inefficient. If you totally ruled that out, you might actually *see* some of the power savings.
The other problem I'd see is that with DC, the voltage (noticeably) decreases over distance.
If that's noticeable in a datacenter, you'd have to have same-length runs to each rack. It might be especially nice if you didn't have to cool the large AC-to-DC substation as much as the datacenter, or if you could just put it on the roof. It will also probably be less safe, since DC doesn't cross zero to help extinguish an arc across two shorted connections. DC circuit breakers are more expensive, and most folks are less experienced with DC.
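To put rough, assumed numbers on the copper question: delivering the same power at 12 V means roughly twenty times the current of a 240 V feed, and the conductor area needed to stay inside a given voltage-drop budget grows with the square of that ratio. A quick sketch in Python, with every figure assumed for illustration (10 kW rack, 15 m run, 3% drop budget):

RHO_CU = 1.68e-8       # resistivity of copper, ohm*m
POWER_W = 10_000       # assumed rack load
RUN_M = 15             # assumed one-way cable length
DROP_FRACTION = 0.03   # allow a 3% voltage drop

for volts in (12, 48, 240):
    amps = POWER_W / volts                  # current the feeder must carry
    max_drop_v = volts * DROP_FRACTION      # allowed drop over the whole loop
    loop_resistance = max_drop_v / amps     # out-and-back resistance budget
    area_mm2 = RHO_CU * (2 * RUN_M) / loop_resistance * 1e6
    print(f"{volts:>4} V: {amps:7.1f} A, ~{area_mm2:7.1f} mm^2 of copper per conductor")

At 12 V the answer comes out north of a thousand square millimeters per conductor, which is "huge copper cable" territory; at 48 V and above it shrinks dramatically, which is presumably why the rack-level DC schemes distribute 48 V or higher and regulate down at the load.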
Billy Crook wrote:
DC is also great at: corrosion, explosions, arc welding, and electrocution.
:)
Your comments are generally correct, but don't take into account current technology. Comments inline.
I wonder if the cost savings take into account the price of all that thick copper needed to distribute DC throughout the datacenter. I've seen 2-inch copper cables firsthand. There's a datacenter downtown that already has DC infrastructure in place, with a big battery, power stepping/switching, and UPS room. It might be more energy efficient, but I wouldn't bother with it unless you could deliver +/-12, +/-5, and +3.3 V to the rack. DC-to-DC conversion is notoriously inefficient. If you totally ruled that out, you might actually *see* some of the power savings.
Not any more. DC-to-DC conversion can now be *VERY* efficient, thanks to low-RDS(on) MOSFETs, high switching speeds, and a variety of interesting switching topologies (e.g., the multi-phase DC-DC switcher that's likely sitting on the motherboard of your computer, feeding 50-100 amps of low-voltage power into your CPU with tight regulation).
The other problem I'd see is that with DC, the voltage (noticeably) decreases over distance.
No more so than AC. Voltage drop is a function of the resistance of the wire and the current you're passing through it. The reason AC works better for long distances (i.e., from your house to the power plant) is that the transmission lines run at *VERY* high voltages. High voltage and low current means lots of power with low resistive losses. The voltage is reduced at power substations, and again at the transformer behind your house. DC is actually slightly *MORE* efficient than AC, as you only have resistive losses in the cable; AC also has losses due to the impedance of the transmission line (parasitic inductance and capacitance making a low-pass filter that increases the effective resistance of the line).
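To illustrate with assumed numbers, here's the same I^2*R arithmetic for one cable run carrying the same power at two distribution voltages; the formula is identical whether the current is DC or AC (using RMS values):

RHO_CU = 1.68e-8    # resistivity of copper, ohm*m
LOOP_M = 30         # assumed out-and-back cable length
AREA_M2 = 35e-6     # assumed 35 mm^2 conductor
POWER_W = 10_000    # assumed load

r_cable = RHO_CU * LOOP_M / AREA_M2     # about 0.014 ohm

for volts in (48, 240):
    amps = POWER_W / volts
    loss_w = amps ** 2 * r_cable        # I^2 * R, DC or AC RMS alike
    print(f"{volts:>4} V: {amps:6.1f} A -> {loss_w:6.1f} W lost ({100 * loss_w / POWER_W:.2f}%)")

The difference in loss comes entirely from the voltage and current, not from AC versus DC.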
Back in the day, the technology didn't exist to (easily) convert DC power from one voltage to another, but transformers did the trick for AC. With modern power circuits, however, you can convert pretty much anything to anything with very high (90+ percent) efficiency.
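A rough sketch of why fewer conversion stages help, using assumed efficiency figures purely for illustration (not anyone's measured numbers):

def chain(*stage_efficiencies):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    eff = 1.0
    for stage in stage_efficiencies:
        eff *= stage
    return eff

# Conventional AC room: double-conversion UPS, then a per-server AC-to-DC
# supply, then the on-board regulators (all efficiencies assumed).
ac_path = chain(0.90, 0.80, 0.90)

# Facility-level DC plant: one big rectifier/charger, then the on-board
# regulators (again, assumed numbers).
dc_path = chain(0.95, 0.90)

print(f"AC path: {ac_path:.0%} of utility power reaches the load")
print(f"DC path: {dc_path:.0%} of utility power reaches the load")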
If that's noticeable in a datacenter, you'd have to have same-length runs to each rack. It might be especially nice if you didn't have to cool the large AC-to-DC substation as much as the datacenter, or if you could just put it on the roof. It will also probably be less safe, since DC doesn't cross zero to help extinguish an arc across two shorted connections. DC circuit breakers are more expensive, and most folks are less experienced with DC.
It sounds like you're referring to arc quenching, which is mostly of concern for things like relays and mechanical switches. For active circuit protection (think GFCI outlets for AC vs. a mechanical breaker), the costs should be basically identical between AC and DC, although obviously demand and production volumes play a role.
As far as safety goes, AC generally won't make you any less dead than DC if you grab onto something you shouldn't. High-frequency AC will create a 'skin effect' where the bulk of the current runs along the surface of a conductor (i.e., it fries your skin, which can recover, and not your internal organs, which won't), but 60 Hz power is far too low a frequency to exhibit any meaningful additional safety margin over DC due to skin effect. Even much higher frequencies are still very dangerous:
http://en.wikipedia.org/wiki/Tesla_coil#The_skin_effect_myth
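The standard skin-depth formula makes the point numerically. A back-of-the-envelope sketch for a copper conductor, delta = sqrt(rho / (pi * f * mu)):

import math

RHO_CU = 1.68e-8            # resistivity of copper, ohm*m
MU0 = 4 * math.pi * 1e-7    # permeability of free space; mu_r of copper ~ 1

for freq_hz in (60, 1_000, 100_000, 1_000_000):
    depth_mm = math.sqrt(RHO_CU / (math.pi * freq_hz * MU0)) * 1000
    print(f"{freq_hz:>9} Hz: skin depth ~ {depth_mm:6.2f} mm")

At 60 Hz the current still penetrates roughly 8-9 mm of copper, so mains-frequency AC offers no meaningful skin-effect protection compared to DC.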
-- Charles Steinkuehler charles@steinkuehler.net
I can't remember the name of the company that has been advertising datacenter equipment that uses common DC power supplies instead of AC with separate power supply units for each system, but they've advertised in the Linux magazines for a year or so now.
I believe the DC power station that served a small section of New York City was in operation from when Edison started it until earlier this year, and has just been shut down.
Oh yeah, and on the yellow/black local power, black was ground. Some 70's - 80's telco equipment still likes to see ground on black. I had a modem that would pick up a local country radio station until I grounded that line.
This summer, while checking out possible new data centers, one of the eval outings our team made was to a former telephone crosslink site. It was over in Shawnee. That site HAD 48V DC and a significant DC bus system installed. There was lots of shiny copper and an impressive power / UPS room.
As for what could connect to it: looking at the options, I think some of the HP ProLiant systems have telco versions that can be fed DC rather than standard AC power supplies. One in particular, the HP P-class blade server, has external power sources that feed 48V DC into the chassis. HP even has a rack power setup that can stack multiple blade chassis and tie them all to a 48V system or a common AC/DC supply.
You DON'T want to drop a screwdriver across it or you WILL see stars.
Fred Armantrout Burns & McDonnell Engineering Co.
You're right on the money as far as the HP servers go. Although the P-class blade enclosures are the older, outdated enclosures that are very cumbersome to install and manage, the newer C-class blade enclosures can also be ordered as "carrier grade" systems with AC or DC power options. There is also an HP Integrity "carrier grade" system (runs Linux, HP-UX, Windows, or OpenVMS) and an HP Intel Xeon-based "carrier grade" system (Linux and Windows), both of which use DC power.
http://www.hp.com/products1/servers/carrier_grade/products/bl460c-cg/index.html
http://www.hp.com/products1/servers/carrier_grade/products/cx2620/index.html
http://www.hp.com/products1/servers/carrier_grade/products/cc3310/index.html
I have seen very few of these installed, mainly because the telcos deal directly with HP and not through a partner.
Phil
-----Original Message----- From: kclug-bounces@kclug.org [mailto:kclug-bounces@kclug.org] On Behalf Of Armantrout, Fred Sent: Friday, December 21, 2007 10:27 AM To: kclug@kclug.org Subject: RE: didn't someone tell me that telco equipment had 40vdc racks once?
This summer, while checking out possible new data centers, one of the eval outings our team made was to a former telephone crosslink site. It was over in Shawnee. That site HAD 48V DC and a significant DC bus system installed. There was lots of shiny copper and an impressive power / UPS room.
As for what could connect to it: looking at the options, I think some of the HP ProLiant systems have telco versions that can be fed DC rather than standard AC power supplies. One in particular, the HP P-class blade server, has external power sources that feed 48V DC into the chassis. HP even has a rack power setup that can stack multiple blade chassis and tie them all to a 48V system or a common AC/DC supply.
You DON'T want to drop a screwdriver across it or you WILL see stars.
Fred Armantrout Burns & McDonnell Engineering Co.
I will sleep better at night, now that I know that information. In fact, I'm going to keep a copy by the bedside for when I can't quite get to sleep.
Brian :)
More worthless information: I worked for the telco in Muscatine, IA for a summer in 1977. I got to remove people's phones and install the modular plugs. They only used the red and green. The yellow and black were used to power the lights in the Princess phones, which had lighted dials so you could see them at night. They were wired to a small transformer, usually in the basement, so if the power went out the phones would still work but the lights in the Princess phones would go out.
During the 60's & 70's, black and yellow were used for in-home wall warts that provided power to things like Princess Phones with light-up dials. Eventually a bad batch of these caused a number of house fires, and the system was discontinued, made unnecessary as solid state technology needed less current and could take power off the line voltage.
Jonathan Hutchins hutchins@tarcanfel.org wrote: During the 60's & 70's, black and yellow were used for in-home wall warts that provided power to things like Princess Phones with light-up dials. Eventually a bad batch of these caused a number of house fires, and the system was discontinued, made unnecessary as solid state technology needed less current and could take power off the line voltage.
Now that you mention it, I remember the recall on that batch of transformers. That's been a long time ago.
Should have read the article more carefully - Rackable is the company I'd heard of before.
To one of the idle speculators earlier, they claim savings in both power consumption and in lower cooling load (hence additional power savings).
On Dec 19, 2007 6:03 PM, Jonathan Hutchins hutchins@tarcanfel.org wrote:
Should have read the article more carefully - Rackable is the company I'd heard of before.
To one of the idle speculators earlier, they claim savings in both power consumption and in lower cooling load (hence additional power savings).
Hey, Rackable, that's where Dr Bob went to work after he left Apple. Rackable lets him play with some nifty stuff: http://www.applefritter.com/node/21248 He's probably got some very good knowledge of the DC systems if anyone wants to know more in-depth stuff.
Jon.
To me this seems like overkill for what can be nearly accomplished with 240V 3-phase power. Most enterprise-class systems have the option to be delivered with 240V 3-phase, which is about as close to DC as you can get on the AC side, but it doesn't require the conversion, and consequently avoids the heat generated by the conversion, which would require additional cooling.
If there is a desire to be more green in a computer room, then it should start with the actual computers themselves. Most computers run constant-speed fans that consume a large amount of electricity when it is not needed. If a CPU and other components of a system are not being used, then they do not need to be cooled as much. If a disk drive is not being accessed much, then it doesn't need to be cooled as much either. Integrating the cooling with monitoring of system usage to reduce the cost of cooling is the direction that should be taken.
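A quick illustration of how much variable-speed fans can matter: by the fan affinity laws, airflow scales with speed but fan power scales roughly with the cube of speed, so modest slowdowns at light load save a lot. The figures below are assumed:

FULL_SPEED_W = 12 * 10    # assumed: a dozen fans at 10 W each, flat out

for duty in (1.0, 0.8, 0.6, 0.4):
    watts = FULL_SPEED_W * duty ** 3    # fan power ~ speed cubed
    print(f"{duty:4.0%} speed: ~{watts:5.1f} W ({duty ** 3:.0%} of full fan power)")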
Then there is the whole idea of having large numbers of servers sitting in a computer room, each consuming electricity while being utilized at an average of 20%. I still don't understand how a company can look at a computer room full of pizza-box servers running at 20% utilization and feel that they are getting their money's worth. Consolidation of servers, into either blade enclosures running some kind of virtualization software platform or larger servers that allow multiple layers of consolidation, is the biggest bang for the green dollar there is. There are companies that have cut their computer room costs by millions of dollars per year by consolidating hundreds of servers down to tens of servers.
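The consolidation arithmetic is easy to sketch. All of the numbers below are assumed, illustrative figures: 200 pizza boxes averaging 20% utilization, replaced by virtualization hosts that each pack roughly four times the compute and run at about 70% utilization:

import math

old_servers = 200
old_watts_each = 250       # assumed average draw per pizza box
work = old_servers * 0.20  # real work, in "old-server equivalents"

host_capacity = 4 * 0.70   # old-server equivalents per host at 70% utilization
new_hosts = math.ceil(work / host_capacity)
new_watts_each = 450       # assumed draw per virtualization host

pue = 1.8                  # assumed facility overhead for cooling, etc.
old_kw = old_servers * old_watts_each * pue / 1000
new_kw = new_hosts * new_watts_each * pue / 1000
print(f"{old_servers} boxes: {old_kw:.0f} kW -> {new_hosts} hosts: {new_kw:.0f} kW "
      f"(saves ~{old_kw - new_kw:.0f} kW around the clock)")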
The cost involved in converting computer rooms from AC to DC would be unrealistic in most facilities, where tremendous cost savings can be obtained simply by changing to newer technology.
Phil
-----Original Message----- From: kclug-bounces@kclug.org [mailto:kclug-bounces@kclug.org] On Behalf Of David Nicol Sent: Tuesday, December 18, 2007 8:23 PM To: kclug Subject: didn't someone tell me that telco equipment had 40vdc racks once?
I've been suggesting this kind of thing for years
http://www.news.com/8301-11128_3-9835281-54.html?part=dtx&tag=nl.e433
-- Looking back, I realize that my path to software as a career began at the age of seven, when someone taught me to count in binary on my fingers.
The point of all this is that instead of every piece of equipment having its own switching power supply, with fan, you supply the required voltages to the whole rack from a common pair of failover power supplies. Each box then gets its own +12, +5, and -5 (or whatever), and we have one less component per unit to fail.
Converting the power once and then distributing it really does beat distributing the AC and converting it at each unit; there are savings in equipment cost, efficiency, and cooling.
Because of this, a 48 volt distribution system doesn't make as much sense, nor does it offer as much advantage over a 120/240V AC distribution system.
Unless those DC rack supplies are ten times more expensive to replace and more likely to fail than a server power supply and harder to find.
On Dec 21, 2007 1:52 PM, Jonathan Hutchins <> wrote:
The point of all this is that instead of every piece of equipment having its own switching power supply, with fan, you supply the required voltages to the whole rack from a common pair of failover power supplies. Each box then gets its own +12, +5, and -5 (or whatever), and we have one less component per unit to fail.
Converting the power once and then distributing it really does beat distributing the AC and converting it at each unit; there are savings in equipment cost, efficiency, and cooling.
Because of this, a 48 volt distribution system doesn't make as much sense, nor does it offer as much advantage over a 120/240V AC distribution system.
Another consideration is that this would effectively put 42 machines on the same 12V rail, and likewise the same 5V rail, and so on. So if a component in one machine failed in such a way that it shorted across that rail, it would take down every device on the rail unless each one had an individual load breaker for each rail. There's also a fairly good chance the short would happen during node insertion, so the breaker would need to be outside the node, possibly ruling out the cheap bus-bar idea unless the breakers were between the busbar and the chassis.
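Rough numbers (all assumed) on why per-node protection matters on a shared low-voltage bus:

BUS_V = 12.0
SOURCE_MILLIOHM = 2.0    # assumed supply plus bus-bar impedance
TAP_MILLIOHM = 3.0       # assumed wiring from the bus bar into one blade

fault_amps = BUS_V / ((SOURCE_MILLIOHM + TAP_MILLIOHM) / 1000)
print(f"prospective short-circuit current: ~{fault_amps:,.0f} A")

# A per-blade breaker or fuse rated a little above the blade's normal draw
# (and able to interrupt that fault current) keeps one failed node from
# dragging the whole rail down.
blade_watts = 300        # assumed per-blade load
normal_amps = blade_watts / BUS_V
print(f"normal blade draw: ~{normal_amps:.0f} A -> fuse somewhere around {normal_amps * 1.5:.0f} A")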
On Dec 21, 2007 2:09 PM, Billy Crook billycrook@gmail.com wrote:
Another consideration is that this would effectively put 42 machines on the same 12V rail, and likewise the same 5V rail, and so on. So if a component in one machine failed in such a way that it shorted across that rail, it would take down every device on the rail unless each one had an individual load breaker for each rail. There's also a fairly good chance the short would happen during node insertion, so the breaker would need to be outside the node, possibly ruling out the cheap bus-bar idea unless the breakers were between the busbar and the chassis.
If DC-powered racks were standardized, with bare copper at some spot on them and a circuit breaker on each component meant to be slid in, that would work, and it would be no more complex than, say, Dell blades. In fact, I wonder if Dell blades don't do that already, with the power supply being the unit that the individual "blade" is inserted into.
On Friday 21 December 2007 14:00:02 Brian Kelsay wrote:
Unless those DC rack supplies are ten times more expensive to replace and more likely to fail than a server power supply and harder to find.
In which case, the claims they make on their web site and in their ads are fraudulent and illegal! Quick, call Luke Jr!
On Friday 21 December 2007 14:09:06 Billy Crook wrote:
Another consideration is that this would effectively put 42 machines on the same 12V rail, and likewise the same 5V rail, and so on. So if a component in one machine failed in such a way that it shorted across that rail, it would take down every device on the rail unless each one had an individual load breaker for each rail.
You don't suppose this occurred to them?
Seriously, guys, have a look at the web site and see if maybe they've addressed your speculation before you raise objections that they've already covered.
These things are real, they've been selling for several years, and they're not a myth or a Rube Goldberg contraption. The company has numbers that say they save significant operating costs, so they must be doing something right.
Not having priced them, and not having an operating budget that includes power, cooling, and maintenance for a site, I don't know whether they save enough to pay for the hardware.
The _ONE_ objection I see is that if you buy their system, you're locked into buying replacements and upgrades from them, since there isn't a standard - yet. Having delivered enough very expensive RAID cards that became unavailable within the MTBF of the cards, that would be a major objection for me.
On Dec 21, 2007 6:21 PM, Jonathan Hutchins hutchins@tarcanfel.org wrote:
Seriously, guys, have a look at the web site and see if maybe they've addressed your speculation before you raise objections that they've already covered.
No one seems to have read the parts about them having DC systems for single racks or rack rows. No need for an entire data center to go DC with a huge infrastructure. http://www.rackable.com/products/powerefficiency.aspx?nid=datacenter_1
Jon.
In a roundabout way, you and I are saying the same thing. Take a look at it again. If they are hard to find, like your RAID cards, they become expensive. Brian