How are you keeping your data center 'green'?

Level 15

The data center is like a seed – with the right amount of water and sunlight it can help an organization flourish into an abundant flower! OK, sunlight and water are hard to find in your data center – for good reason – but the power needed to run a data center has to come from somewhere, all in the name of helping an organization grow into a productive, efficient, and agile body.

Data centers are one of the fastest-growing energy consumers in the IT industry, and all that power can have an adverse effect on the environment. At the same time, we know you're under pressure to keep processes humming and your organization running smoothly, which sometimes means the environment takes a backseat. With Earth Day just around the corner, we want to hear the tips and tricks that are helping you keep your data center green, like utilizing more environmentally friendly power sources or powering down infrastructure where possible.

Let us know what an IT environmentalist looks like by sharing ways to keep data centers running green by April 14th and we’ll plant 250 THWACK points in your account for you to grow whatever you like!

44 Comments
MVP

In my current environment we really don't do a lot to keep things "green." Here's what I've done in other places.

Airflow is vital - set up hot and cold aisles to direct air for efficient use of the air conditioning. Without this there are hot spots, dead spots, and vortexes that just eat up cooling capacity. Also, make sure that all of your racks have all of the U space populated - not necessarily with equipment; it may be equipment or a cover plate. Without that, again, you create hot spots where heat just swirls and never gets extracted. Also, keep in mind the airflow direction of the fans in your appliances, switches, etc. Many manufacturers offer fan kits that can blow forward or reverse to match the needs of your hot and cold aisles.

Most modern equipment has "green" features - fans that spool up as needed rather than running full speed all the time, CPUs that can ramp to demand. Use them; don't just set everything to max because "bigger is better."

Buy what you need - it's "cool" to have the latest multi-CPU box with a gazillion cores in each socket, but if your application(s) only require the compute power of a 286 (yes, I'm dating myself - but I could go back much further), then you are just burning energy to have the "cool" factor. The irony? Being cool makes the data center hot.

Take a lesson from cloud computing: in a virtualized environment, have machines turn off when not needed. If a machine is up, it is burning energy and creating heat.
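If you want to automate that, here's a minimal sketch of an overnight job, assuming pyVmomi and vCenter access. The hostname, credentials, "auto-shutdown" annotation convention, and CPU threshold are all illustrative assumptions, not anything from this thread:

```python
# Hedged sketch: gracefully shut down idle, opted-in VMs overnight.
# Assumptions: pyVmomi installed, VMware Tools in the guests, and a
# vCenter reachable at the (made-up) hostname below.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

IDLE_CPU_MHZ = 100          # below this average demand we call the VM "idle" (assumption)
MARKER = "auto-shutdown"    # only touch VMs whose notes contain this tag (assumption)

ctx = ssl._create_unverified_context()   # lab shortcut; validate certs in production
si = SmartConnect(host="vcenter.example.local", user="svc_green",
                  pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        powered_on = vm.summary.runtime.powerState == "poweredOn"
        idle = (vm.summary.quickStats.overallCpuUsage or 0) < IDLE_CPU_MHZ
        opted_in = MARKER in (vm.summary.config.annotation or "")
        if powered_on and idle and opted_in:
            print(f"Shutting down idle VM: {vm.name}")
            vm.ShutdownGuest()   # graceful shutdown via VMware Tools
finally:
    Disconnect(si)
```

Gating on an annotation keeps a job like this from touching anything nobody explicitly opted in.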

Consider consolidation - everything in this industry cycles. It used to be "buy a server for each function," then it became "buy fewer servers and add more functions to them," then virtualization came along and it became "buy a big box and have it do lots of functions" (again on disparate VMs). Consider putting multiple functions on machines or VMs and reducing the total number of machines.

We "encourage" our data center vendor to run their environments in ways that optimize uptime AND reduce the vendor's costs.

Where we can, we utilize less hardware and more efficient styles of hardware.

And keeping the chlorophyll running under the floors required us to hire special horticulturists who specialized in breeding and maintaining below-floor sod farming. The plants absorb waste heat and transfer it into a recycling system that keeps the rest of the building's office areas fresh and pleasant smelling.

We had to decide between growing highly miniaturized cows and designing very low-profile lawn mowers. Both have their own unique polluting outputs.


Level 10

We had new, non-illegal air con put in last year with less toxic refrigerant. Definitely not because the old units failed and everything nearly melted.

MVP

Most of my SolarWinds customers outsource their data centre requirements. Some of my customers have never seen their data centre equipment, and in many cases it's virtualised and multi-tenant anyway. I look back in fondness at the days when I used to visit data centres and see the equipment. Now I find that I'm monitoring remote systems all over the place, and it's rare that I ever get to see the hardware.

I drive an electric vehicle and I recycle a lot, if that helps, but increasingly, as engineers, we don't have access to the equipment anymore. Sad days.

Level 16

You house your data in a pyramid that draws power from an ancient secret power source


MVP

Looks like a Thunderbirds base!

MVP

Nice, rschroeder! What about the mini cow patties?

Of course those could be put into a methane generator and produce power to help run the datacenter...

We went to Cisco UCS and migrated everything to VMware, so we reduced the number of physical servers from 14 to 3 blades and one physical box. We use an EMC VNX5400 for all storage except backups, use motion-sensor lights that go off when you are not in the room, and have replaced all lighting with LEDs, which we also dim down whenever possible. I do, however, like the pyramid idea. I am also trying to develop a series of magnets that will create energy and stay in perpetual motion, powering the entire planet, but I am a few months away from having that work.

dg.magnet.02.jpg

MVP

Uranium is a good power source for this kind of thing.

Level 13

It's funny...and I can't really explain it...however:

Two years ago I put a TED energy monitor in my server room - it has its own electrical panel, so the installation was easy. It shows me my CO2, estimated monthly cost, and my energy usage...

The room only had a single AC unit, and I knew it wasn't enough when I started, so I'd been pushing the facilities manager to install a second one. Finally, in February, he had a second unit installed, which was connected to my monitored electrical panel... now the room is 63 degrees vs. 75 degrees, and my energy usage DROPPED... not a lot, but about $15.00 per day...
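Just to put a rough scale on that number (the electricity rate below is an assumption, not something from the post), $15 a day corresponds to a surprising amount of continuous load:

```python
# Rough conversion of the observed drop, under an assumed $0.10/kWh tariff.
savings_per_day_usd = 15.00
rate_usd_per_kwh = 0.10                                 # illustrative assumption

kwh_per_day = savings_per_day_usd / rate_usd_per_kwh    # ~150 kWh/day less
avg_kw_reduction = kwh_per_day / 24                     # ~6 kW of continuous load
annual_savings_usd = savings_per_day_usd * 365          # ~$5,475 per year
print(f"~{kwh_per_day:.0f} kWh/day, ~{avg_kw_reduction:.1f} kW average, "
      f"~${annual_savings_usd:,.0f}/year")
```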

Go figure.

Level 10

We recently updated our uninterruptible power supply and circuits.  They are more efficient and conserve resources.

Level 12

We run one server with VMs to use less hardware, and we set up VPNs.

Level 16

It's a real building - Switch SUPERNAP


Level 10

For us it is as simple as having fewer physical machines and more virtual machines. Less power usage = more green!

Level 9

We fill all of the spaces in the rack (using cover plates where needed).

Ensure the fans on all equipment are pushing air in the same direction.

Switched to a virtualized environment, reducing 50+ physical machines down to 15.

Most of our servers have high-efficiency fans that spin up when needed.

And most recently we have been removing excess equipment that was no longer needed.

  (I notice in a lot of data center cages there are servers that are no longer in use but are still pulling power.)

MVP

I remember marketing materials from about 5-6 years ago where HP was touting that the power management in their latest (at that time) generation of servers was so good that the power savings would pay for the cost of the actual server over 3 years. Now, I didn't see the actual math, but if it were true, a savvy finance person could easily justify upgrades.
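That's exactly the kind of claim a savvy finance person can sanity-check with back-of-the-envelope math. A hedged sketch is below; every number is an illustrative assumption (not HP's figures), so whether the three-year payback holds depends entirely on what you plug in:

```python
# Simple payback check: does the energy saved by a newer server cover its price?
# All inputs are illustrative assumptions -- substitute your own measurements.
old_server_watts = 500        # average draw of the old box
new_server_watts = 300        # average draw of the replacement
pue = 1.8                     # facility overhead (cooling, distribution)
rate_usd_per_kwh = 0.12       # electricity tariff
new_server_cost_usd = 4000.00 # purchase price of the new server

watts_saved = old_server_watts - new_server_watts
kwh_saved_per_year = watts_saved / 1000 * 24 * 365 * pue
savings_per_year = kwh_saved_per_year * rate_usd_per_kwh
print(f"~${savings_per_year:,.0f}/year saved, "
      f"payback in ~{new_server_cost_usd / savings_per_year:.1f} years")
```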

Level 12

This is the old Steelcase Furniture HQ that Switch converted.

Steelcase_Pyramid.jpg

The State of Michigan gave this company 0% sales tax, as you mentioned in your post. That stirred the honey pot, and now other data centers in Michigan are demanding the same "green."

Switch GRAND RAPIDS: Largest, Most Advanced Campus in the Eastern U.S. - Switch

If it is not a VM... someone needs to ask, "Why not?"

We have reduced power consumption by 60% in the last seven years.

RT

Level 9

Never turn the lights on... feel your way across the room and read by the blinking lights of switches and routers...

MVP

I'd love to put a wind turbine on top of the building... but being in the approach path of a local airport, we can't put much of anything up there.

Ideally it would be a mix of wind and solar, so on a day like today, with not much wind but lots of sun, we'd still get power.

We have been tracking power consumption on the raised floor for a few years now.

Our data center facilitators are able to get real-time and trend data on changes resulting from airflow, new units, new equipment, etc.

When they make changes they are able to see whether the costs go up or down.

Level 16

One of the 'lights out' data centers I worked in had a light switch on the left side of the door and an EPO button on the right side.

I pointed out to the DC manager that most people, being right-handed, feel around for the switch on that side when entering a dark room.

A hinged plastic cover appeared the next day.


MVP

Those plastic covers mean nothing to some people.

We had a small EPO switch on the wall away from the door, under a plastic cover, with a large sign around it.

The contract person who was mopping under the raised floor hit that button to exit (not the red exit button beside the door), effectively shutting down the data center... BAM!

Nothing like the sudden deafening sound of silence in a data center.....

Similarly, we were concerned about the vulnerability of an EPO switch, and we contracted with a licensed Data Center electrical company to redo the switches, relocate them, put protective coverings over them, put large red warning labels all around them, and train everyone how & when to use them.

That didn't prevent an electrician from folding up his ladder at the end of the day & leaning it up against the wall--against an active EPO switch that had no protective plastic cover on it.

BAM.  All power out in the data center.  OMG.

And some pretty severe embarrassment for the electrician and his company.

Level 14

We have gone to virtual assets wherever possible to cut down on the number of physical machines. We also now have dedicated HVAC systems for specific rooms. We have found that this actually cuts power consumption, because each system runs less. When a system covers an area close to its max specification, it runs all the time. Hot/cold aisle racking helps too. We also turn off virtualization servers when they are not needed.

We are migrating most of our physical servers to MPC & AWS, based on project requirements. We have decommissioned a data center (it took a year to complete the migrations to the cloud).

You name IT - we have IT (Virtual environment Voice)

MVP

Mostly we are moving more physical servers to a virtual environment to cut down on power consumption and space used. We also shut off lights and any monitoring screens when not in use to save energy. It also helps to make sure your hot and cool aisles are kept isolated from each other. The less heat in the cool aisle, the less the A/C units have to ramp up to keep up with cooling.

MVP

When we built our most recent datacenter (about 10 years ago), the executives and people that matter visited various datacenters in the US and took note of what they liked about each one.  They then designed our datacenter based upon pieces of each one.  The majority of the building was built with recycled materials. When trying to get our official green certification, we missed it by only a few points.

Level 16

The UW operates three data centers (consolidated from five), two of which are within walking distance of my office. [Actually, given enough time, they are all within walking distance.]

The UW Tower Data Center is one of the ~100 Energy-star rated Data Centers in the US: UW Tower data center earns ENERGY STAR rating for 3rd straight year | In Our Nature

The 4545/ADS Data Center has had a number of upgrades, and has expanded by 63% for only a marginal (<1%) increase in energy use.

Note: we're very competitive and into green initiatives: RecycleMania week two: UW still leads Pac-12 | In Our Nature

Level 20

energy-efficient-data-centers.jpg

Now that's a green DC! We are also migrating to Cisco UCS and using VMware and NSX to reduce our physical server footprint.

Level 9

Our company has several dozen small to medium-sized data centers throughout the country. We have several initiatives in place to help reduce our footprint, one of which is server virtualization. If a physical server is needed, a lot of justification is required. On a larger scale, we are consolidating our data centers into just a handful of regional data centers. This is likely the single largest effort toward becoming more green. In that consolidation, we are also outsourcing those to shared data centers.

Level 9

We took what would be the obvious route.  Our data center had over 15 physical boxes (on older hardware).  Since we were doing an upgrade anyway, I convinced the powers that be to go virtual.  I wanted to go cloud, but they wanted none of that.  So we went from 15 physical boxes to 2 physical boxes and all virtual servers.  Simple, but a first step.

Level 16

We throw money at it 


Level 11

Points have been awarded!

MVP

Awesome, that took me over 125,000!

Level 9

I used to work for a company that tried to innovate based on their unique business model... I haven't worked there for over a decade, but I never did find out if they were successful or not...

Grave Magnet.gif

Level 21

Our most modern data center was built from the ground up to be a "green" data center. Instead of using AC units, we use indirect evaporative coolers, and the water used for those is almost all collected from the rooftop of the data center and stored in an underground containment tank. We are located in a very temperate environment with a good deal of rain, so we rarely need to use city water for the coolers. Also, all of the cabinets are inline chimney cabinets, so all of the heat from systems is exhausted directly back to the cooling units, to make cooling as efficient as possible. We also don't use battery UPS systems and instead use flywheel UPS systems.

MVP

A few things towards green -

  • Virtualize, virtualize, and virtualize
  • Power management on endpoints using ACPI (see the sketch after this list)
  • Power management on servers with virtualization
  • EnergyWise on Cisco switches
  • Smart LED lighting
  • Ensuring proper airflow in the DC
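On the ACPI item, here is a minimal sketch of what a quick endpoint audit can look like, assuming Linux endpoints that expose cpufreq through sysfs. That assumption, and the list of governors treated as "OK," are mine for illustration, not from the post:

```python
# Report each CPU's cpufreq scaling governor so machines pinned to
# "performance" around the clock stand out. Linux + sysfs assumed.
from pathlib import Path

def report_governors() -> None:
    cpus = sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*/cpufreq/scaling_governor"))
    if not cpus:
        print("No cpufreq interface found (not Linux, or P-states not exposed to this VM).")
        return
    for gov_file in cpus:
        governor = gov_file.read_text().strip()
        cpu = gov_file.parts[-3]          # e.g. "cpu0"
        note = "" if governor in ("powersave", "ondemand", "schedutil", "conservative") \
               else "  <- always full speed?"
        print(f"{cpu}: {governor}{note}")

if __name__ == "__main__":
    report_governors()
```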
Level 12

No, that's the Goa'uld embassy building.

Level 12

At one of my previous employers, that switch was much too close to the big red secure-door-opening button. Yes, after the unfortunate incident with the overnight security guard, the dangerous button received a plastic cover. Not 100% protection, but it forced one to think a little before pressing the button.

Level 12

Our coffee packs are recyclable, and the coffee grounds are reused as fertilizer. As we all know, this is the single greatest resource used by any group of techs.

We also carry out regular analysis of our servers' resource usage, reporting physical machines that could be virtualized or decommissioned and virtual machines that could be merged or decommissioned. Even virtual machines consume resources that can be freed up.
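For what it's worth, the reporting side of that can be very small. A hedged sketch follows; the CSV name, column names, and thresholds are assumptions for illustration, not any particular monitoring product's export format:

```python
# Flag under-utilized machines from a monthly utilization export (hypothetical file).
import csv

CPU_IDLE_PCT = 10      # avg CPU below this -> candidate (illustrative threshold)
MEM_IDLE_PCT = 20      # avg memory below this -> candidate (illustrative threshold)

with open("monthly_utilization.csv", newline="") as f:
    for row in csv.DictReader(f):          # columns: name,type,avg_cpu_pct,avg_mem_pct
        cpu, mem = float(row["avg_cpu_pct"]), float(row["avg_mem_pct"])
        if cpu < CPU_IDLE_PCT and mem < MEM_IDLE_PCT:
            action = ("virtualize or decommission" if row["type"] == "physical"
                      else "merge or decommission")
            print(f"{row['name']}: {cpu:.0f}% CPU / {mem:.0f}% RAM -> candidate to {action}")
```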

Level 7

Recently replaced an aging cooling system. Upgraded some old Ethernet switches, reducing both noise and power consumption. Continually virtualizing when possible. Looking to upgrade lighting to LED and reduce any remaining clutter in server rooms to increase airflow. Change filters regularly, too.

Level 9

Here are things we've been doing over the past 10 years, since we built our current data center. We have about 3,000 sq ft of raised computer room floor space and 4 CRAC units on 2 completely redundant chiller loops to the roof.

Virtualization of servers - although we've grown our data center to about triple the number of servers we had back in 2006, we actually have fewer physical servers now than we did 10 years ago. We are at about 80% virtual.

Free cooling (2007) - no need to run compressors to cool during the cooler months of late fall, winter and early spring.

Hot aisle containment (2010) - according to the Wisconsin Focus on Energy program director, we were the first in the state to do this. So early, in fact, that there were no dollars available in the FoE program for it.

Smart cooling (2013) - temp sensors determine which rows need cooling and the nearest CRAC will ramp up to meet that cooling need.

Digital blower motors on the CRACs (2013) - these allow us to run the blowers at only whatever speed is needed to maintain the desired temp. No need to be at 100% or all off.
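For anyone curious what that combination of smart cooling and variable-speed blowers boils down to, here's a toy sketch of the control idea. It is purely illustrative; the setpoint, gain, and row temperatures are made up, and real CRAC controllers do this in firmware with proper PID loops:

```python
# Toy proportional control: ramp the nearest CRAC's blower to match each row's
# temperature error. Setpoint, gain, and the row readings are all assumptions.
SETPOINT_F = 72.0
GAIN_PCT_PER_DEG = 15.0      # % blower speed added per degree above setpoint
MIN_SPEED, MAX_SPEED = 20.0, 100.0

def blower_speed(row_temp_f: float) -> float:
    """Return a blower speed (%) for the CRAC nearest this row's temp sensor."""
    error = row_temp_f - SETPOINT_F
    return max(MIN_SPEED, min(MAX_SPEED, MIN_SPEED + GAIN_PCT_PER_DEG * error))

for row, temp in {"row A": 71.2, "row B": 75.8, "row C": 73.4}.items():
    print(f"{row}: {temp:.1f}F -> blower at {blower_speed(temp):.0f}%")
```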

Because of these changes, we were able to raise the ambient temperature to about 72 degrees. We no longer need to keep the computer room at 60 degrees to keep the servers within their operating temp range.

Initially, we needed to run 3 out of 4 CRAC units (both chiller loops) to keep the room cool. Now we run 2 CRACs on 1 chiller loop, switched monthly.

With all of these changes, we were able to lower our total energy used and the power bill for our computer room over the past 10 years, even though the cost per unit of that energy has increased by almost 50% in that same time.
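To put that last point in perspective, here is the simple arithmetic on the ~50% rate increase mentioned above (not the site's actual billing data): with the unit price up by half, the bill only shrinks if consumption falls by more than a third.

```python
# Break-even reduction in kWh needed for the bill to fall despite a ~50% rate hike.
rate_increase = 0.50                           # cost per kWh up ~50% (from the post)
breakeven = 1 - 1 / (1 + rate_increase)        # = 1/3
print(f"kWh must drop by more than {breakeven:.0%} for the total bill to go down")
```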