
How Monitoring can help to break up the Silo Mentality

The Root Cause of Silo Thinking

I have had to face the silo problem many times in the past. In IT, many organizations work in silos. For example, the server department only cares about problems concerning servers. When a ticket with a problem arrives, it often starts a process that I call "Ticket Ping Pong": instead of solving the problem, it is easier to forward the ticket to a different IT department and let them take care of it.

Some user help desks assign all tickets to the networking team because "it's always the network's fault". With that mindset, every team ends up having to prove that its system is not responsible for causing the problem. That isn't the best approach in many cases. Problems could be solved more quickly and effectively if everybody worked together.

One Monitoring to Rule Them All

It is very common for every IT department to have its own monitoring in place, often a highly specialized system that comes directly from the hardware vendor: a shiny little box from the well-trusted vendor they have been using for ages. These systems have their benefits and are often a combination of management and monitoring. So, for example, for the server team there are no problems unless they show up in their own monitoring system. For a specific problem that involves only one system, that works. But in the real world you often face more complex problems that span multiple systems. You need a monitoring solution that covers all the complexity in your infrastructure and can handle different device types and services. The highly specialized, vendor-specific monitoring can coexist with that, but all the IT departments have to break up the silos and start to work together. The general thinking should be that we are all in the same boat. A monitoring project can build bridges and bring the different IT departments closer together.

The goal should be to have all systems in the environment on the same monitoring at the end of the day. That creates visibility and trust. When everybody is looking at the same monitoring, they share the same knowledge of what is going on when a ticket shows up. When the server admin sees in a single pane of glass that the firewall is running at 100% CPU utilization, he knows how to address the ticket and that it is probably a good idea to wait for feedback from the firewall team.
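
To make that single pane of glass concrete, here is a minimal sketch (not the author's actual setup) of a shared check that polls a firewall's CPU over SNMP so every team sees the same value in the central monitoring. The device name, community string, threshold, and the Cisco-specific OID are placeholders to adapt to your own environment.

# Minimal sketch: one shared check, visible to all teams, instead of a value
# trapped inside the firewall team's vendor tool.
# Assumptions: net-snmp's `snmpget` is installed, the device answers SNMPv2c
# with community "public", and the OID below (Cisco cpmCPUTotal5minRev) fits
# the vendor -- all of these are placeholders.
import subprocess

FIREWALL = "fw01.example.com"                 # hypothetical device name
CPU_OID = "1.3.6.1.4.1.9.9.109.1.1.1.1.8.1"   # vendor-specific; adjust per device
THRESHOLD = 90                                # percent CPU that should raise an alert

def poll_cpu(host: str) -> int:
    """Return the CPU utilization the device reports via snmpget."""
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", "public", "-Oqv", host, CPU_OID],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

if __name__ == "__main__":
    cpu = poll_cpu(FIREWALL)
    status = "ALERT" if cpu >= THRESHOLD else "OK"
    # In a real deployment this result would be written to the shared
    # monitoring system, not printed.
    print(f"{status}: {FIREWALL} CPU at {cpu}%")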

In times of virtualization and SDN this is even more important. There are so many dependencies between the different parts of the infrastructure that the initial entry point for troubleshooting is hard to figure out. Sometimes the problem is hiding behind the different layers of virtualization. It is a big effort to bring all the systems into centralized monitoring, but it is absolutely worth doing. At the end of the day, all "software-defined anything" runs on hardware, and that hardware needs to be monitored.

23 Comments
MVP

Ahh...

I have seen this work and I have seen it fail. The difference is whether IT embraced it from the top (senior management) down.

That is the only way to truly break up, or really only weaken, the silos... yes, political infighting between teams will still occur. But mandating that all alerting, notifications, and event-based ticketing go through a central point, and that the owners of the point solutions must provide output to the monitoring team, will make it go farther. It takes time...
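
To illustrate what such a central intake point could look like (a minimal sketch, not any particular product's API): every point solution posts its alerts to one endpoint, which normalizes them into a common record before they reach the shared ticketing or monitoring system. The /alerts path, the port, and the field names are assumptions.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AlertIntakeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/alerts":
            self.send_response(404)
            self.end_headers()
            return
        length = int(self.headers.get("Content-Length", 0))
        raw = json.loads(self.rfile.read(length) or b"{}")
        # Normalize whatever the point solution sends into one common record.
        event = {
            "source": raw.get("source", "unknown"),   # e.g. "storage-array-tool"
            "node": raw.get("node", "unknown"),
            "severity": raw.get("severity", "warning"),
            "message": raw.get("message", ""),
        }
        # Placeholder for handing the record to the central system
        # (ticket creation, shared dashboard, etc.).
        print("central intake:", event)
        self.send_response(202)
        self.end_headers()

if __name__ == "__main__":
    # Any team's tool can now POST to http://<host>:8080/alerts
    HTTPServer(("", 8080), AlertIntakeHandler).serve_forever()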

Level 17

I couldn't agree with Jfrazier more. If you have the mandate from the top, it always makes things easier. Anything overarching like help desk projects or ones with a security intent such as SIEM tends to bring the various teams together. Although many of the SolarWinds solutions help you with this goal of breaking down silos, the one solution that I have seen break silos faster than anything I have experienced is Database Performance Analyzer. I completely agree with you that monitoring has the potential to foster collaboration.

Level 14

I think we all have seen this.  Using a single monitoring tool that is used by all can help bridge those silos in short order.

Level 14

I agree.  It doesn't mean that each team can't have its own tool for specifics.  Some of the tools provide more detailed views and additional functionality that you may not be able to, or want to, do without.  There should, however, be an overarching or go-to (first-look) tool used by all.

Silos form naturally and are usually inevitable when the roles and responsibilities are dished out. Every so often these silos are destroyed and then reformed after the inevitable reorg. You know... the reorg to "...take advantage of obvious efficiencies."

I say, "Embrace the silos!" Monitoring should have its own silo. The NOC should be independent of Operations and Administration. The NOC is the watchdog, the auditor, the proverbial "feet to the fire!" for all the other groups to answer to. The independent NOC monitors, alerts, and reports on all activities, and the other departments have to answer... not just to the NOC, but to management as well.

I agree with much of what has been commented above, and I'll take it a step further:  nothing should prevent C-level or director-level administrators from being hands-on, from directing staff to cooperate better, and from breaking through silo walls.  But you'll have seen that something does get in the way.

People haven't walked a mile in the other person's shoes and haven't seen the big-picture view from that single pane of glass, so they have no frame of reference for how much better life can be with those experiences.  A solution (not the only one--just one solution) can come from cross training.

I worked in an organization where staff rotated among departments and were trained by their peers to do many different jobs.  When this happened, morale improved, problems decreased, and stress went down.  People became more valuable as they picked up new skills, and they recognized problems they'd missed previously.  In some cases they even contributed to new and better processes and solutions.

Doing this in IT can be a real challenge due to the widely different skill sets required deep in each silo.  But it works. I've worked at several places where top management required folks in different departments to cross train each other, and the result was tremendously positive.  They developed better communication skills, understood each other's challenges better, and even were able to make suggestions and contributions that improved conditions and reduced time to correct problems.

Not to mention we were all able to cover for other people if someone took vacation, became ill, or moved to another employer.  It was win-win all the way around, because our morale significantly improved, we felt better about ourselves, and we discovered learning and sharing helped us better understand how each of our departments meshed with others.  As a result, we were better paid, had fewer problems, and were able to resolve issues much more quickly.

From Management's point of view, workers became more productive, and costs and down time decreased.  Employees thus became more valuable as errors and downtime decreased, the company became more profitable, and the employees got raises because they were more valuable with the additional training.

This is called "The Hawthorne Effect", and Western Electric discovered it by accident when trying to find out how changing lighting conditions in their Hawthorne plant might improve or decrease productivity.  They told the employees they were experimenting with different lighting to see what the workers liked better.  The workers liked Management trying to improve their environment--even though the end goal was to improve productivity.  The company was surprised to learn that EVERY lighting change--whether intuitively better or worse--resulted in improved productivity. Their studies turned up the fact that workers are happier when they see Management is interested in them, and is even willing to spend money on improving life at the plant.

IT Management can duplicate this by getting the right SolarWinds products in place and cross training people to better know each other's jobs.  I'm not saying I could step out of my role as a Network Analyst and become a Web Marketer or Apps Analyst or DBA; I'm saying I'd love to know more about the details of their roles, and to share my job skills with them.  When I've worked at places that have done the cross training, it's always worked out well for the business and the employees.  Costs go down, productivity goes up, and morale takes a great turn up, too.

Do it!  You'll be glad you did.  And what do you have to lose?  Maybe someone will be able to see a process improvement in my world that I can benefit from.  Maybe I'll come up with solutions for their worlds, too.  And we all become a better team as a result.

Great post, rschroeder!

I find that, in my own role, by learning about other technologies that would traditionally be left to the 'other team', I'm able to pick at my clients' issues from multiple different perspectives and find out-of-the-box solutions that I never would have considered before.

I come from a server admin background, and now I have to think about the impact of implementing SolarWinds Orion properly on the network (traditional, wifi and occasional VoIP), database, application AND server for any given client. Whilst I'm a loooong way from being an expert in all of these fields, each deployment teaches me more about one of them.

In my view, a good SolarWinds consultant has to have experience all the way up the OSI model, from the physical (deploying kit) to the app (Orion, SQL and the OS it all sits on), to do a great job. All this before even taking into account the 'soft skills' of presentation, documentation and public speaking. Oh, and then you have to add in the ITIL bits, look at how the Ops guys can fit Orion into their processes, or how to rewrite those processes now that Orion is in play... there's so much to it and I'm loving every minute of it!

Level 9

Thanks for all the interesting feedback.  In my experience, the end user / customer is not interested in which IT department has fixed the problem. To an outsider, IT is one big building block. They have no idea of the different silos inside IT. So if it is possible to improve, we should do it. Good tools can help tear down the walls that exist in the heads of some admins.

Level 14

Very good, well-thought-out response.  We did a lot of cross training back in my Navy days.  I haven't seen much of it on the civilian side.

Level 10

Silos. Ticket ping pong. Pointing at each other. If you have been in IT long enough, I'm pretty sure you have seen or experienced this one way or another. I totally agree with networkautobahn: this mentality hinders the IT organization's ability to cope with the speed and requirements of the business. These are the hurdles that keep IT from moving forward, and finding a way to bridge that gap and make IT whole again is essential.

One monitoring tool for all is a vital piece of solving this problem, of tearing down that wall, breaking that barrier, and creating a more focused and united IT team. The more your IT staff are focused, engaged, and comfortable sharing ideas and opinions and helping each other rather than blaming one another, the more they will contribute to the success of the business.

Having one monitoring tool for the entire team allows everybody to see the big picture, quickly identify the root cause, collaborate in a healthy way, and proactively avoid outages. Cross training has also proven effective in some companies, but some are just not a fit for it. At the end of the day, we can all have one monitoring tool for the entire team, but the success of this setup and its future will depend on management. Unless management has genuine concern for its people (not just in IT), and as long as people do not feel that concern from their higher-ups, there will still be those who just come to work, go home, and don't completely CARE.

Level 12

Great commentary and thread. While the primary post briefly mentions software monitoring, I have spent a great deal of my time reaching out to application tech leads and managers to engage them and add SAM monitoring for their services and applications on their nodes. Network, database, hardware, switches, routers, et al. are all critical components that must be monitored, but I have found that another spoke in the "wheel" of single-pane-of-glass monitoring is the applications and services themselves. Not only does app monitoring, combined with server monitoring, generally lead to faster issue resolution, the app owners have generally felt that a higher level of scrutiny on their apps leads to less finger pointing when incidents and problems occur. This "methodology" can be politically painful at times too, as many in app development simply want to get their app into PROD and let OPS babysit it.

The next fun step is to attempt to use AppStack and more reporting to identify trends and remediate issues before they even alert or cause downtime. Then we might actually be able to cross-train, reduce the impact of retirements and folks moving on, and generally make a better workplace...
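
As a rough illustration of spotting a trend before it alerts (a generic sketch, not AppStack itself), you could fit a simple linear trend to recent samples and estimate when a threshold will be crossed. The disk-usage numbers and the 90% threshold below are made up.

# Estimate when a metric will cross a threshold at its current trend.
# Requires Python 3.10+ for statistics.linear_regression.
from statistics import linear_regression

# (hour, disk usage %) samples from the last day -- hypothetical numbers
samples = [(0, 71.0), (6, 73.5), (12, 76.0), (18, 78.4), (24, 80.9)]
THRESHOLD = 90.0

hours = [h for h, _ in samples]
usage = [u for _, u in samples]
slope, intercept = linear_regression(hours, usage)

if slope > 0:
    hours_to_threshold = (THRESHOLD - usage[-1]) / slope
    print(f"~{hours_to_threshold:.0f} hours until {THRESHOLD}% at the current trend")
else:
    print("usage is flat or falling; no action needed")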

Level 14

Outstanding post, rschroeder

The us vs. them mentality is a killer. I think specialization plays a role in the creation of silos, and C-level IT folks need to foster teamwork and appreciation for everyone's knowledge and perspective... Cakes are not made from one ingredient, and neither are successful IT departments.

Level 15

Great post.

Level 9

I can't stand that rice bowl mentality.

Level 11

Here is a thought I have mentioned in the past in different forums: clean in house first.  In other words, troubleshoot the problem with the team you're on before you kick the can to another team.  Credibility is lost when a team or individual constantly points the problem in another direction, only to find out it could have been solved by the original team or individual by cleaning in house first.

Level 14

Absolutely, clean in house first.

Level 21

I have worked with many of these hardware vendor monitoring tools that you mention and I am always impressed with how terrible they are.  It makes me wonder why these vendors don't just partner with a company like SolarWinds to create a bundled solution.

MVP

Because they can keep things proprietary and charge more money.

Level 20

I think most of us are on board with this monitoring thing... I used the original CiscoWorks on SunNet Manager before HP OpenView really even took off... it was terrible!

Level 10

I have used the Logical Maps to break down silos. Take the network layer and put those devices into a server group's view for reference, then add in Application Monitors from SAM, and you can hear the silo walls breaking. I've seen what took hours and days to troubleshoot be brought down to minutes.

Tier 1 gets much smarter when you go across the silos or, as we like to say, flip over the maps. The services are down in those maps; you just have to work them up into a quality view.

Level 12

Can you post a screenshot?

Level 10

I would like to, but I can't post a screenshot. Private network.

We have maps that start off two portals, one for the network engineers, the other for the server and application teams. At the start of each portal we demarcate where one team starts and stops. We also dead-end the diagrams so the alerts don't cascade all the way up to a router. So, when the server team's maps go red, they know the problem is theirs. We do allow each team to see each other's portals and the demarcation points. Seeing the big picture is so cool for troubleshooting. A picture is worth a thousand words, more than a thousand.

Visual engineering is the art we practice, along with alert management. We have over 700 maps, and our tiered teams use these maps as the O&M diagrams. We put a hook on the maps to the actual network engineering diagram, but for most troubleshooting folks don't pull up the NE diagram: too much information. We also lay out the maps as services, so the ticket assignment to a service is derived from the map. It works like a charm, but discipline is key: process engineering, people following the process, and discipline.
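
That dead-ending of the diagrams is essentially topology-aware alert suppression. Here is a minimal sketch, with a made-up topology and device names, of suppressing child alerts when an upstream parent is already down:

from typing import Optional

# child -> upstream parent at the demarcation point (None = top of its map)
PARENT: dict[str, Optional[str]] = {
    "core-router": None,
    "server-switch": "core-router",
    "app-server-01": "server-switch",
    "app-server-02": "server-switch",
}

def alerts_to_raise(down_nodes: set[str]) -> set[str]:
    """Return only the nodes whose failure is not explained by a failed parent."""
    raised = set()
    for node in down_nodes:
        parent = PARENT.get(node)
        # Walk up the chain; if any ancestor is also down, this alert is suppressed.
        suppressed = False
        while parent is not None:
            if parent in down_nodes:
                suppressed = True
                break
            parent = PARENT.get(parent)
        if not suppressed:
            raised.add(node)
    return raised

if __name__ == "__main__":
    # The switch failed, taking both app servers with it: only one alert fires.
    print(alerts_to_raise({"server-switch", "app-server-01", "app-server-02"}))
    # Expected output: {'server-switch'}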

About the Author
I have worked for 15+ years in the networking industry, across many different sectors such as industry, car manufacturing and government. I am a monitoring enthusiast and have done monitoring for large-scale environments. I blog at networkautobahn.com, and my recently started podcast can be found at networkbroadcaststorm.com.