
Geek Speak


Network admins definitely play the role of Network Gumshoe. Dealing with daily network issues like bandwidth hogs, IP conflicts, rogue users, and more, administrators spend a considerable amount of time investigating and resolving network problems. But are they really equipped for this kind of troubleshooting? Is there a specific troubleshooting process involved in finding problematic users/devices while ensuring minimal downtime?

 

In a network, employees come in with devices pre-configured with IP addresses from prior Internet connections (home or elsewhere). This could result in an IP conflict with a critical application server that could cause an interruption of services. In other cases, IP conflicts happen when a network admin accidentally assigns a duplicate IP address, or a rogue DHCP server operating in the network hands out IP addresses at will. Bandwidth issues creep up in the presence of a YouTube hog, or when someone misuses company resources for unofficial purposes. Finally, rogue users who’ve somehow gained entry to the network may attempt to access confidential data or restricted networks. All these frequently occurring incidents threaten to upset any smoothly functioning network.

 

In any case, the primary goal of a network admin is to fix an issue with minimal downtime and take steps to ensure that it doesn’t happen again. For issues associated with problematic users/devices in a network, here are four simple steps to follow when troubleshooting:

  • Quickly identify and investigate the problematic user/device.
  • Locate the problematic user/device.
  • Immediately remediate the problematic user/device.
  • Take steps to prevent the same situation from happening again.

[Image: network troubleshooting process diagram]

 

  1. To quickly detect problems in the network, it’s best to have a monitoring tool in place. Depending on which specific area of the network needs monitoring, admins can set up timely alerts and notifications (a minimal sketch of such a threshold alert follows this list). Specific monitoring tools are available to help, including those that let you see the up/down status of your devices, IP address space, user/device presence in the network, etc. Once the bandwidth hog, IP conflict, or rogue DHCP server is identified, the first step of the troubleshooting process is complete.
  2. The next critical step is determining whether the user/device in question actually caused the problem. You need to look at detailed data that reveals the amount of bandwidth used, who used it, and for what application. You should also look at details on devices in an IP conflict and determine what type of conflict it was, look for the presence of unauthorized devices in the network, and so on. This investigation should also provide data on the location of the user/device in the network, including details like switch port information, or the Wireless Access Point (WAP), if it’s a wireless device.
  3. The third step is remediation. Whatever caused the network interruption needs to be fixed. Knowing the location of the problem, as identified in the previous step, is very helpful for taking immediate action. Admins can either physically locate the device and unplug its network access, or use tools that enable the remote shutdown of devices. The remote option is especially helpful for admins working with networks spread over large areas or multiple locations. The critical point here is that network access needs to be revoked immediately.
  4. Finally, take steps to prevent the same problem from happening again. If it’s the case of a problematic user/device, make sure these systems are blocked from entering the network, or that their entry at least triggers a notification. Create checkpoints and monitoring mechanisms so that you can take proactive measures and prevent unauthorized users from entering your network.
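To make step 1 a little more concrete, here is a minimal, hypothetical sketch of the kind of threshold check a monitoring tool runs to flag a bandwidth hog. The link capacity, the 40% threshold, and the sample data are illustrative assumptions, not settings from any particular product.

```python
# Hypothetical sketch: flag "bandwidth hogs" from per-host usage samples.
# Thresholds and sample data are illustrative assumptions only.

LINK_CAPACITY_BPS = 100_000_000   # assume a 100 Mbps uplink
HOG_SHARE = 0.40                  # alert when one host uses >40% of the link

def find_bandwidth_hogs(samples_bps):
    """samples_bps maps host -> observed bits per second over the poll interval."""
    hogs = []
    for host, bps in samples_bps.items():
        share = bps / LINK_CAPACITY_BPS
        if share > HOG_SHARE:
            hogs.append((host, round(share * 100, 1)))
    return sorted(hogs, key=lambda h: h[1], reverse=True)

if __name__ == "__main__":
    observed = {"10.0.0.21": 52_000_000, "10.0.0.35": 4_500_000, "10.0.0.87": 61_000_000}
    for host, pct in find_bandwidth_hogs(observed):
        print(f"ALERT: {host} is using {pct}% of link capacity")
```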

 

What troubleshooting processes do you follow in your organization? Feel free to share your experiences, which fellow network admins might find useful.

If you're running a network performance monitoring system, I'll bet you think you have visibility into your network.

 

I say you don't – or at least that your vision may be a bit more blurry than you realized.

 

Gathering Statistics

 

There are three kinds of lies: lies, d***ed lies, and statistics.

 

In reality there's nothing wrong with the statistics presented by a network performance management system, so long as you understand the implications of the figures you're looking at and don't take them as literal truth. For example, when I set a utilization alarm threshold at 90% on a set of interfaces, what does 90% actually represent? If an application sends a burst of data and the interface is used at 95% of its capacity for 3 seconds, should that trigger an alarm? Of course not. How about if it's at 95% utilization for 3 minutes while a file transfer takes place; should that trigger an alarm? Maybe rather than triggering alarms on specific short-term utilization peaks I should be forecasting based on an hourly average utilization instead; that would even out the short-term peaks while still tracking the overall load on the link.
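As a tiny illustration of that trade-off, here is a sketch using made-up one-minute samples (this is not output from any monitoring product): alarming on every raw sample fires during a brief burst, while alarming on the hourly mean does not.

```python
# Illustrative only: compare alerting on raw short-term samples vs. an hourly
# average. The sample values are invented to show the difference in behaviour.

samples = [30] * 57 + [95, 95, 95]           # 60 one-minute samples: a 3-minute burst at 95%
hourly_average = sum(samples) / len(samples)

raw_alarms = [s for s in samples if s > 90]  # alarming on every sample fires 3 times
avg_alarm = hourly_average > 90              # alarming on the hourly mean does not fire

print(f"hourly average: {hourly_average:.1f}%  "
      f"raw alarms: {len(raw_alarms)}  average-based alarm: {avg_alarm}")
```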

 

And thus we touch on the murky world of network performance management and the statistical analysis that takes place in order to present the user with a single number to worry about. Each product developer will make their own decisions about how to process the numbers, which means that, given the same underlying data, each product on the market will likely generate slightly different results.

 

Garbage In, Garbage Out

 

Before any statistical analysis can take place, data must be gathered from the devices. "GIGO" implies that if the data are bad, the outputs will be bad, so what data are we gathering, and how good are they?

 

Monitoring systems will typically grab interface statistics every 5 minutes, and a standard MIB-II implementation can grab information such as:

 

  • ifSpeed (or ifHighSpeed): the speed of the interface
  • ifInOctets / ifHCInOctets (received octets, or bytes)
  • ifOutOctets / ifHCOutOctets (sent octets, or bytes)

 

Since there is no "current utilization" MIB entry, two polls are required to determine interface utilization. The first sets a baseline for the current in/out counters, and the second can be used to determine the delta (change) in those values. Multiply the deltas by 8 (to convert bytes to bits), divide that by the polling interval in seconds, and I have bits-per-second values which I can use in conjunction with the interface speed to determine the utilization of the interface. Or rather, I can determine the mean utilization for that time period. If the polling interval is five minutes, do I really know what happened on the network in that time?
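That arithmetic is simple enough to sketch. The snippet below assumes you already have two counter readings in hand (for example from SNMP polls of ifInOctets); counter wrap and ifHighSpeed scaling are ignored for brevity, and the numbers are invented for the example.

```python
# A minimal sketch of the utilization arithmetic described above.

def utilization_percent(octets_t1, octets_t2, interval_seconds, if_speed_bps):
    delta_octets = octets_t2 - octets_t1            # change in the counter between polls
    bits_per_second = (delta_octets * 8) / interval_seconds
    return (bits_per_second / if_speed_bps) * 100

# Example: two polls 300 seconds apart on a 100 Mbps interface
print(utilization_percent(1_200_000_000, 1_950_000_000, 300, 100_000_000))
# (750M octets * 8) / 300 s = 20 Mbps -> 20% mean utilization for that period
```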

 

The charts below represent network interface utilization measured every 10 seconds over a five minute period:

 

[Image: four interface utilization charts, each sampled every 10 seconds over five minutes]

 

All four charts have a mean utilization of 50% over that five minutes, so that's what a 5-minute poll would report. Do you still think you have visibility into your network?

 

Compromises

 

Network performance management is one big, bad set of compromises, and here are a few of the issues that make it challenging to get right:

 

  • Polling more often means more (and better resolution) data
  • More data means more storage
  • More data means more processing is required to "roll up" data for trending, or wide data range views, and so on.
  • How long should historical data be kept?
  • Is it ok to roll up data over a certain age and reduce the resolution? e.g. after 24 hours, take 1-minute polls and average them into 5-minute data points, and after a week average those into 15-minute data points, to reduce storage and processing? (A quick sketch of this kind of roll-up follows this list.)
  • Is the network management system able to cope with performing more frequent polling?
  • Can the end device cope with more frequent polling?
  • Can I temporarily get real-time high-resolution data for an interface when I'm troubleshooting?
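As referenced in the roll-up bullet above, here is a rough sketch of averaging fine-grained samples into coarser data points. Real products differ in exactly how they do this; the bucket size and sample values are assumptions for illustration.

```python
# Roll up fine-grained utilization samples into coarser averaged data points.

def roll_up(samples, bucket_size):
    """Average consecutive groups of `bucket_size` samples into one data point."""
    return [
        sum(samples[i:i + bucket_size]) / bucket_size
        for i in range(0, len(samples) - bucket_size + 1, bucket_size)
    ]

one_minute_polls = [12, 18, 95, 90, 15, 11, 13, 14, 12, 10]   # % utilization
five_minute_points = roll_up(one_minute_polls, 5)
print(five_minute_points)   # [46.0, 12.0] -- the 90%+ spikes disappear into the average
```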

 

What Do You Do?

 

There is no correct solution here, but what do you do? If you've tried doing 1-minute poll intervals, how did that work out for you in terms of the load on the polling system and on the devices being polled? Have storage requirements been a problem? Do you have any horror stories where the utilization on a chart masked a problem on the network? (I do, for sure.) How do you feel about losing precision on data older than a day (for example), or should data be left untouched? Do you have a better way to track utilization than SNMP polling? I'm also curious if you simply hadn't thought about this before; are you thinking about it now?

 

I'd love to hear from you what decisions (compromises?) you have made or what solutions you deployed when monitoring your network, and why. Maybe I can learn a trick or two!

The month of July is always special for various reasons. We officially welcome the summer heatwave, and it’s also one of those times when we look forward to taking a break and spending time with family. Another reason July is special is that it’s time to give thanks to your friendly Systems Administrator: yes, the person you always call for help when you get locked out of your computer, or when your email stops working, or when the internet is down, or when your mouse stops responding, or when you need just about anything.

        

To all the SysAdmins and IT superheroes out there, SysAdmin Day is fast approaching. And this year, just like every year, we at SolarWinds dedicate the entire month of July to SysAdmins across the world and we invite you to join the festivities as we celebrate SysAdmin Day in the biggest possible way.

       

SysAdmin Day can mean a lot of things to IT pros and organizations. But, what I think makes SysAdmin Day a day to remember is being able to share your journey with fellow SysAdmins and IT pros. So, in the comment section, share why you chose this career path, the meaning behind those defining moments, or remind us about the day you knew you were going to be a SysAdmin. Take this time to also narrate funny instances or end-user stories that made you laugh or challenging situations you successfully dealt with in your very own server rooms.


We’re super thrilled about this year’s SysAdmin Day because we will have fun weekly blogs on Geek Speak to celebrate and a thwack monthly mission that offers weekly contests and exciting giveaways. Some of these sweet prizes include:

            

Now it’s time to get the party started. Visit the July thwack monthly mission page today!


What do we mean today when we talk about managing our environments in the cloud? In the old physical server days, we had diverse systems to manage the network, the storage, and the server infrastructure. As time moved on, these systems began to merge into products like Spectrum and OpenView. Many players came into a space that quite often involved a vendor-specific tool; your server manufacturer would often tie you in to a specific management tool.

 

Again, as time moved on, we began to see 3rd-party tools built to specifications that used SNMP traps and APIs no longer unique to particular vendors, and which furthered the ability to monitor hardware for faults and alert on high utilization or failures. This extended our abilities considerably. But were these adequate to handle the needs of a virtual environment? Well, in enterprises, we had our virtual management tools to give us good management for that infrastructure as well. However, we still had to dig into our storage and our networks to find hot spots, so this was not going to allow us to expand our infrastructure to hybrid and secondary environments.

 

This whole world changed drastically as we moved things to the cloud. Suddenly, we needed to manage workloads that weren’t necessarily housed on our own infrastructure, we needed to be able to move them dynamically, and we needed to make sure that connectivity and storage in these remote sites, as well as our own, could be monitored within the same interface. Too many “Panes of Glass” were simply too demanding for our already overtaxed personnel. In addition, we were still in monitor-but-not-remediate mode. We needed tools that could not only alert us to problems, but also help us diagnose and repair the issues that arose, as they inevitably did, quickly and accurately. It was no longer enough to monitor our assets. We needed more.

 

Today, with workloads sitting in public, managed, and private spaces, yet all ours to manage, we find ourselves in a quandary. How do we move them? How do we manage our storage? What about using new platforms like OpenStack or a variety of different hypervisors? These tools are getting better every day; they’re moving toward a model wherein your organization will be able to use whatever platform, whatever storage, and whatever networking you require to manage your workloads, your data, and your backups, and to move them about freely. No single tool is there yet, but many are close.

 

In my opinion, the brass ring will be when we can live-migrate workloads regardless of location, virtualization platform, etc. To be sure, there are tools that will allow us to do clones and cutovers, but being able to move workloads live, with no data loss and no impact on our user base, to AWS, to our preferred provider, or in and out of our own data centers as we desire is truly the way of the future.

If you asked Michael Jordan why he was so successful, he’d probably tell you it was because he spent four hours each day practicing free throws. The fundamental basics are everything.

 

“You can practice shooting eight hours a day, but if your technique is wrong, then all you become is very good at shooting the wrong way. Get the fundamentals down and the level of everything you do will rise.”

- Michael Jordan


This can be extended to all things, and planning your storage environment is no exception. As a storage administrator, you obviously consider important parameters like device performance, storage consumption, total cost of ownership, etc. when writing a storage strategy. But have you given thought to basic things like understanding data growth or the business importance of data? They have a large impact on day-to-day storage operations, and thus on the business. In this post, I will touch upon two points that you can consider in your storage strategy blueprint.


Analyze Your Data in More Ways than One 

 

Data forms the crux of your storage. So, before you draft your storage strategy, you need to go through your data with a fine-tooth comb. You should have a basic understanding of where your data comes from, where it will reside, which data will occupy what kind of storage, etc. It is widely believed that in most enterprises, 80% of data is not frequently accessed by business users. If that is the case, why does that data need to reside on high-performing storage arrays? Normally, only 20% of data is regularly needed by the business and is considered active. This allows you to place the other 80% on a lower-cost solution that provides enough performance, and reserve your high-performing storage for active data.
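As a rough illustration of that split, the sketch below classifies files by last-access age so infrequently used data can be flagged for a cheaper tier. The 90-day cutoff and tier names are assumptions for the example (and atime isn't always reliable on every filesystem); this is not a recommendation for any particular array.

```python
# Classify files by last-access age to identify candidates for cheaper storage.
import os
import time

ACTIVE_WINDOW_DAYS = 90   # assumed cutoff between "active" and rarely used data

def classify_by_access(paths):
    now = time.time()
    tiers = {"active": [], "archive-candidate": []}
    for path in paths:
        age_days = (now - os.stat(path).st_atime) / 86400
        tier = "active" if age_days <= ACTIVE_WINDOW_DAYS else "archive-candidate"
        tiers[tier].append(path)
    return tiers

# Example (hypothetical paths):
# tiers = classify_by_access(["/data/report.xlsx", "/data/2012_archive.csv"])
```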

 

Another overlooked factor is the business value of data. An employee leave-balance record is normally not as important as a quarterly financial projection. Understanding the significance of your data can help you assign storage accordingly.

 

The last step is understanding the life cycle of data. Information which is critical today may lose its importance in the long run. A regular audit of the data lifecycle will help you understand what data needs to be archived, in turn allowing you to save storage space and budget. Having a good understanding of your data landscape will help you plan your future storage requirements more accurately.


Collaborate with Your Business Units Frequently

 

As a storage expert, you know that running out of disk space is not an option, so staying on top of storage capacity is truly your top priority. But you may not be able to achieve this unless you frequently collaborate with the business units in your organization. With a storage monitoring tool, you can accurately predict when you will run out of free space, but that might not be sufficient. Why?

 

Here is an example: say you are planning on 50 TB of data growth for your 5 business units over the next year, 10 TB each, based on evaluating the previous year’s storage consumption for each unit. Then your company decides to acquire another company that needs an additional 30 TB of storage. In this scenario, you will be forced to make a quick storage purchase, which will strain your limited budget.
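The arithmetic behind that scenario is simple, but spelling it out shows how quickly an unplanned demand outruns a forecast based only on historical growth. The figures below are the ones from the example above.

```python
# Back-of-the-envelope version of the planning example above.
business_units = 5
growth_per_unit_tb = 10
planned_growth_tb = business_units * growth_per_unit_tb      # 50 TB planned for

acquisition_tb = 30                                          # unplanned demand
actual_requirement_tb = planned_growth_tb + acquisition_tb   # 80 TB actually needed

shortfall_tb = actual_requirement_tb - planned_growth_tb
print(f"Planned: {planned_growth_tb} TB, needed: {actual_requirement_tb} TB, "
      f"unbudgeted shortfall: {shortfall_tb} TB")
```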

 

By having a better understanding of the business unit’s plan, you could have made a plan to accommodate the additional storage requirements. In fact, in larger organizations, legal and compliance teams play an important role in shaping the storage strategy. These functions largely rely on storage teams to meet many mandatory regulatory requirements. Frequent collaboration with your company’s business units will equip you with the knowledge on how data is expected to grow in the future. This will allow you to understand your future storage needs and plan your budgets accordingly.

 

These are just a couple of the many aspects that contribute to a successful storage strategy. The two points above are subtle ones that can easily be missed if not fully understood. What are the other important factors that you take into account when mapping out your storage strategy? What are the common pitfalls you faced while drafting a storage strategy? Share your experience.

TRIBUTE TO 'GEEK SPEAK AMBASSADORS'

I want to share with the thwack community the latest e-book that we developed at SolarWinds. Why is this so special and rewarding to thwack? Well, because we have picked out some great contributions by our esteemed Geek Speak ambassadors, put a creative spin to them, and presented them as an e-book with an instructive story and some fetching visuals and graphics.

 

AN ENGAGING E-BOOK TO BEHOLD

Outline: Meet Cameron (he thinks of himself as the average, run-of-the-mill IT guy, but in reality is way more than that), who was called upon to manage the IT team of a mid-sized call center. This is a company that was growing rapidly in staff and business operations, but kept itself quite conservative in adopting new trends and technologies.

 

In Cameron’s journal, you will get to read 4 chapters about how he confidently explores new avenues for changing the work culture in his IT team. Further, how he plans to implement new processes and practices towards greater productivity, teamwork, and successful help desk operations.

 

CAMERON’S CHRONICLES

This e-book is available in 4 chapters. Just click on each topic to read the contents of that chapter.

Chapter 1: Building a Positive and Encouraging Help Desk Culture

Read now »


Chapter 2: Defining SLA for Successful Help Desk Operations

Read now »


Chapter 3: Developing Workflows Using Help Desk Data

Read now »


Chapter 4: How to Fence Off Time for a Help Desk

Read now »


 

Tell us what you thought about this e-book and whether you have other ideas for using and repurposing some of your awesome thwack posts and thought contributions.

 

You can also download the full e-book as a PDF from the link below.

scuff

What Is Your Why?

Posted by scuff Jun 29, 2015

Browsing the discussions and resources here in the Thwack community, I can see that you are an incredible bunch of people - passionate and knowledgeable about your own areas of expertise & eager to guide & advise others.

 

But, for a moment, let’s look up from what we are doing. Let’s talk about your Why.

I'm sure you all know what you do. That would be easy to tell someone, right? And you could probably also go into great detail about how you do it. But do you have a real vision for why you do what you do? Do you have a vision that extends beyond "because it's my job" or "because the servers would fall over if I didn't"?

 

This isn't a deeply technical topic, but it can become your secret weapon. Your Why can lift you during a frustrating day. It can get you through a 1am call out (as well as all the caffeine). And it will be music to the ears of management or the business when you come to them with a problem or a recommendation. If you can frame your conversations with them to address their Why, I can guarantee you better success.

 

“I'm a sys admin. I monitor and maintain our servers. I do this to keep the business running. The end users can then serve their customers.” Easy so far, right?

So what does your company do for its customers, who are ultimately your customers? Is the product or service vision of your company as deeply ingrained in your mind as it is in the marketing department?

Does your company give peace of mind to people and help them in the toughest times … i.e. are you an insurance company?

 

So really, you give people peace of mind & help them in the toughest times by ensuring that your staff have fast access to their computer systems when they need them.

 

Tell me, what is your Why? How do you think your Why can influence how you work & what decisions you make?

 

Simon Sinek explains Apple’s Why: https://www.youtube.com/watch?v=sioZd3AxmnE

 

P.S. If you are in this for the thrill of working with technology, I get that too.

As a child of the 80's, there are particular songs (like "Danger Zone", "Shout", "No Sleep Till Brooklyn") that really bring me back to the essence of that time period (RIP mullet). There are even times where current events take me back to a specific song. Take today’s "storage market" and all the technologies that are being discussed. Looking at this has me going back to the classic Public Enemy song, "Don't believe the hype…” There is so much "hype" in regard to different technologies that it can be overwhelming to keep up, let alone make a decision on what is right for your company. You also have to manage the pressures of business needs, storage performance needs, data protection, data growth, and resource constraints to just name a few. I might come off pro old-school IT, but I’m not. Ask yourself some of the questions below, and make sure the promise of these new trends makes sense for your business before you jump on the bandwagon.

 

Hyper-convergence

 

Hyper-convergence is a software infrastructure solution that combines compute, storage, networking, and virtualization resources in commodity boxes. The promise is integrated technologies that provide a single view for the administrator. This makes it easier for users to deploy, manage, grow, and support their environment because everything is tied together. This is great for quite a few environments, but is it great for "your" environment? What do your VM workloads look like? Do they all have similar resource requirements, or are some VMs more demanding than others? Do your resource needs (CPU, memory, storage, etc.) grow evenly, or are some resources growing faster than others?

 

If you’re considering a hyper-converged solution, check out this whitepaper: The Hyper-Convergence Effect: Do Management Requirements Change?

 

Solid State Drives

 

Solid state drives have been around for decades, but over the last few years they have really taken off thanks to new technology advances (PCIe/NVMe), and the cost of flash has come down dramatically. The promise of SSD is higher performance, better durability, better cooling, and denser form factors. This has led to claims that hard drives are dead and SSD (flash) is all that is needed in your data center. Is this right for "your" environment? Do you have a need for high performance across your entire environment? What is your capacity growth, and how does it compare to performance growth? Will your applications take advantage of SSDs? Do you have the budget for flash storage across all applications?

 

If you are considering making a move to Solid State drives, check out this SolarWinds Whitepaper: How do I know my SQL and Virtual Environments are ready for SSD?

 

Cloud Storage

 

For years people have been talking about "the cloud," whether private or public. For this post, we'll talk about public clouds. Over the last couple of years we have seen more businesses adopt cloud into their data storage environment. The promise is allowing companies to access their data anywhere, freeing up IT resources, providing scalability to grow your business, and reducing IT costs, to name a few. This has led to claims that everything is going to the cloud and that keeping storage "on premises" is no longer ideal. For many companies, this might be ideal, but is it ideal for "your" environment? What happens if there is an "outage," whether at the cloud provider or in your connection to the cloud? Do you have the bandwidth to support your users' access from an external location? What cloud provider are you using, and are you locked in to that provider? How will you manage your data security and protect against attacks on that data?

 

These are just a few of the "storage technologies" currently being "hyped" in the market, and each of them has a place in almost all data centers. However, just because a new technology solves certain data center problems does not mean it will solve "your" problems. Understanding your problems and where you want to take your business is the best way to move past the "hype" of a new technology and really see the value it will provide.

 

Now, what do you think? Is there too much "hype" in the storage market? What storage technology do you think is being over "hyped"?       

A couple of weeks ago, I wrote about how I love documentation and how useful it can be in the future, for you or someone else. I also lamented slightly about how it is hard on most projects to get the time to write good reference documentation. This week, I'm going to discuss one possible way to knock out some of the more time-consuming parts.

 

The key word is: Design. That is--the "Design" phase of the project. In most projects, regardless of the flavor of SDLC utilized, there is a block of time to be utilized for designing the solution. Since this is work that will be happening anyway, getting some good documentation out of it is simply a matter of writing down what gets discussed, decisions that get made (along with the "why"s, of course), and how the solution(s) will be structured. Chances are these things get written down, anyway, but outside the mindset of their possible use as future reference material. Usually, by its nature, designing a project will produce some useful artifacts; things like high-level architecture diagrams or possibly an ERD or two. If it's a data-integration or BI project, and one is into details, source-to-target mappings are likely generated at this point.

 

All of these items add up to a decent set of notes for the future, explaining the solution and where it came from. This way, even if no dedicated time can be worked into a project for documentation, useful items can be produced.

 

I think there's another benefit to this course of action. I have a phrase I use to describe my philosophy on this topic: I don't like to think while I'm writing ETL. This sounds kind of bad on the surface, but what I really mean is this: when it comes time to actually sling some code, I want to be able to concentrate on the mechanical details: things like doing everything right, following best practices, addressing security concerns, and making things as high-performing as they can be, all right from the get-go. With the data flows already drawn out and the business rules documented and agreed upon, it is easy to focus on those things.

 

We had a fairly nice conversation about it being hard to get the time to write documentation when I wrote about it before. Would beefing up the efforts during the design phase help? Are y'all even able to get good time for the design stage in the first place (I've been in predicaments where that gets axed, too)?

Vegaskid

Will the arms race ever end?

Posted by Vegaskid Jun 22, 2015

In my fourth post in my tenure as Thwack ambassador for June, I thought I would talk about what appears to be the never ending battle between good and bad. If I can get to the end without mentioning 'cyber' more than that single reference and various different coloured hats, then my work here will be done. The purpose of this post is to hopefully spark some discussion around the topic and I would love to hear what your take is.

 

Attacks on computer systems are nothing new. The web is full of stories of viruses that seem to go back further and further in time, the more you look. The first I am aware of is the Creeper virus, which was released on ARPANET way back in 1971, before even this oldie was born. Over forty years later, the anti-virus vendors have still failed to deliver adequate protection against viruses, trojans, worms, and other similar bad things that I will bundle together under the label of malware. The problem doesn't stop at deliberately malicious code either: software bugs, interoperability issues between different systems, 'helpful' developer back doors. It seems that no sooner has one attack vector been patched up than another 100 fill its place. Technology has for the longest time been used by both sides to get a leg up on the other.

 

The fact that technology and our pace of life are advancing at an ever-increasing rate means that this cycle is getting ever more frequent. Personally, I feel that this is one of the key reasons why it will never end. That sounds a bit depressing, but I am a realist at heart (often mistaken for a sceptic by the rose-tinted-spectacle-wearing brigade). I strongly believe that if you follow a number of best practices, some of which I highlighted in my first post (Defence in depth), keep up to date with relevant industry news and events, and have a good internal culture, with all staff bought in, good documentation and processes, and buy-in from the top down, and we work together as a mature community, we give ourselves a better chance of being protected. It's not unreasonable to state that the majority of drive-by attackers will give up and move on if you present a big enough obstacle to penetrate. If you don't offer any real defences, though, thinking all is lost, you will almost certainly experience that as a self-fulfilling prophecy.

 

Let me know what your thoughts are on my scribbles above and what you think the battlefield will look like in 20 years' time.

"Shadow IT” refers to the IT systems, solutions, and services used by employees in an organization without the approval, knowledge, and support of the IT department. It is also referred to as “Stealth IT.” In its widely known usage, Shadow IT is a negative term and is mostly condemned by IT teams as these solutions are NOT in line with the organization's requirements for control, documentation, security, and compliance. Given that this increases the likelihood of unofficial and uncontrolled data flows, it makes it more difficult to comply with SOX, PCI DSS, FISMA, HIPAA, and many other regulatory compliance standards.


  

The growth of shadow IT in recent years can be attributed to the increasing consumerization of technology, cloud computing services, and freeware services online that are easy to acquire and deploy without going through the corporate IT department.

  • Usage of Dropbox and other hosted services for storing and exchanging corporate information can be shadow IT.
  • Installation and usage of non-IT-approved software on company-provided devices is also shadow IT. Whether it is installing a photo editing tool, music player, or a pastime game, if your IT regulations are against them, they can also be shadow IT.
  • BYOD, not in accordance with the IT policy, can contribute to shadow IT as IT teams have no way of finding out and protecting corporate data stored on personal devices.
  • Even usage of USB drives or CDs to copy corporate data from corporate devices can be considered shadow IT, if the company’s IT policy has mandated against it.

 

CHALLENGES & ADVERSE IMPACT OF SHADOW IT

The foremost challenge is upholding security and data integrity. We risk exposing sensitive data to sources outside the network firewall, and also risk letting malicious programs and malware into the network, causing security breaches. Some companies take this very seriously and stipulate strict IT regulations which require IT administrator access to install new software on employee workstations. Some websites can also be blocked on the corporate network if there is a chance of employees exposing data there. These could be social media, hosted online services, personal email, etc.

 

There have been various instances of compliance violations and financial penalties for companies that have had their customer information hacked due to the presence of intrusive malware in an employee’s system, leading to massive data breaches. Should we even start talking about the data breaches on the cloud? It'll be an endless story.

 

Additionally, shadow IT sets the stage for asset management and software licensing issues. The onus falls on the IT department to constantly scan for non-IT-approved software and services being used by employees, and to remove them according to policy.

 

SHOULD SHADOW IT ALWAYS REMAIN A TABOO?

This is a debatable question because there are instances where shadow IT can be useful to employees. If IT policies and new software procurement procedures are too bureaucratic and time-consuming, and employees can get the job done quickly by resorting to free tools available online, then, from a business perspective, why not? There are also arguments that, when implemented properly, shadow IT can spur innovation. Organizations can find faster and more productive ways of working with newer and cheaper technologies.

 

What is your take on shadow IT? No doubt it comes with more bane than boon. How does your organization deal with it?

Fresh out of high school, I got a job working in a large bank. My favorite task was inputting sales figures into a DOS-based system and watching it crash when I tried to print reports. I extended my high school computing knowledge by working over the phone with second-level support. I confessed that I could understand what they were doing, but I wouldn’t know where to start.

 

They saw some raw potential in me and invited me onto an IT project to roll out new computers to the bank branches nationwide. I was to learn everything on the job: DOS commands, device drivers, Windows NT architecture, TCP/IP addressing, token ring networks, SMTP communications and LDAP architecture. My first IT task was cloning computers off an image on a portable backpack, when the registry didn’t exist & Windows was just a file copy away.

 

Very little was done with a GUI. The only wizard was Windows Installer. When you learn about IT from the ground up and you understand the concepts, you can then troubleshoot. When things didn’t work (and you couldn’t just google the error) I could apply my knowledge of how it should be working, know where to start with some resolution steps and ‘trial and error’ my way out of it. Now, that knowledge means I can cut through the noise of thousands of search results and apply ‘related but not quite the same’ scenarios until I find a resolution.

 

More than one person has commented to me that this generation of teenagers doesn't have the same tech-savvy skills as the previous one, because they are used to consumer IT devices that generally just work. So, in a world where technology is more prevalent than ever, do we get back to basics enough? Do we teach the mechanics of it all to those new to the industry or to someone looking for a solution? Or do we just point them to the fix?

Data people and developers don't really get along very well. There have been lots of reasons for this historically, and, I'm guessing, as time goes on, there will continue to be conflict between these two groups of people. Sometimes these disagreements are petty; others are more fundamental. One of the areas I've seen cause the most strife is shared project work using an Agile software development lifecycle. I know talking about Agile methodologies and data-related projects/items in the same sentence is a recipe for a serious religious battle, but here I want to keep the conversation to a specific couple of items.

 

The first of these two items is what happens when an application is developed using an ORM and a language that allow the dev team to not focus on the database or its design. Instead, the engineer(s) only need to write code and allow the ORM to design and build the database schema underneath. (Although this has been around for longer than Agile processes have been, I've seen a lot more of it on Agile projects.) This can lead to disconnects for a Development DBA-type person tasked with ensuring good database performance for the new application or for a Business Intelligence developer extracting data to supply to a Data Mart and/or Warehouse.
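For readers who haven't seen the pattern, here is a minimal ORM model in the SQLAlchemy style, chosen only as a common Python example since the post doesn't name a specific ORM; the table and columns are invented. The point is that the schema is generated from the class definition, with no DBA or BI developer in the loop.

```python
# Minimal ORM sketch: the database schema is derived from the class definition.
from sqlalchemy import create_engine, Column, Integer, String, Numeric
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Order(Base):
    __tablename__ = "orders"                 # hypothetical table for illustration
    id = Column(Integer, primary_key=True)
    customer_name = Column(String(100))
    total = Column(Numeric(10, 2))

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)             # the ORM emits CREATE TABLE; no DBA involved
```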

 

Kind of by its nature, this use of an ORM means the data model might not be done until a lot of the application itself is developed…and this might be pretty late in the game. In ETL Land, development can't really start until actual columns and tables exist. Furthermore, if anything changes later on, it can be a lot of effort to make changes in ETL processes. For a DBA that is interested in performance-tuning new data objects/elements, there may not be much here to do--the model is defined in code and there isn't an abstraction layer that can "protect" the application from changes the DBA may want to make to improve performance.

 

The other problem affects Business Intelligence projects a little more specifically. In my experience, it's easy for answers to "why" questions that have already been asked to get lost in the documentation of User Stories and their associated Acceptance Criteria. Addressing "why" data elements are defined the way they are is super-important to designing useful BI solutions. Of course, the BI developer is going to want/need to talk to the SMEs directly, but there isn't always time for this allotted during an Agile project's schedule.

 

I've found the best way to handle all this is focusing on an old problem in IT and one of the fundamental tenets of the Agile method: Communication. I'll also follow that up with a close second place: Teamwork. Of course, these things should be going on from Day 1 with any project…but they are especially important if either item discussed above is trying to cause major problems on a project. As data people, we should work with the development team (and the Business Analysts, if applicable) from the get-go, participating in early business-y discussions so we can get all of the backstory. We can help the dev team with data design to an extent, too. From a pure DBA perspective, there's still an opportunity to work on indexing strategies in this scenario, but it takes good communication.

 

Nosing into this process will take some convincing if a shop's process is already pretty stable. It may even involve "volunteering" some time for the first couple projects, but I'm pretty confident that everyone will quickly see the benefits, both in quality of project outcome and the amount of time anyone is "waiting" on the data team.

 

I've had mixed feelings (and results) working this type of project, but with good, open communication, things can go alright. For readers who have been on these projects, how have they gone? Are data folks included directly as part of the Agile team? Has that helped make things run more smoothly?

IT pros now have the added responsibility of knowing how to troubleshoot performance issues in apps and servers that are hosted remotely, in addition to monitoring and managing servers and apps that are hosted locally. This is where tools like Windows Remote Management (WinRM) come in handy, because they allow you to remotely manage, monitor, and troubleshoot applications and Windows server performance.

                   

WinRM is based on Web Services Management (WS-Management), which uses Simple Object Access Protocol (SOAP) requests to communicate with remote and local hosts, multi-vendor server hardware, operating systems, and applications. If you are predominantly running a Windows environment, WinRM provides remote management capabilities that let you do the following (a short scripting example follows the list below):

  • Communicate with remote hosts using a port that firewalls and client machines on a network typically leave open.
  • Quickly start working in a cloud environment and remotely configure WinRM on EC2, Azure, etc. and monitor the performance of apps in such environments.
  • Ensure smoother execution and configuration for monitoring and managing apps and servers hosted remotely.
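As a quick illustration of driving WinRM from a script, the snippet below uses the third-party pywinrm package (an assumption on my part, and not a SolarWinds tool); the host name and credentials are placeholders, and WinRM must already be enabled on the target.

```python
# Run a remote PowerShell command over WinRM using the pywinrm package
# (pip install pywinrm). Host and credentials below are placeholders.
import winrm

session = winrm.Session("server01.example.com", auth=("administrator", "P@ssw0rd"))
result = session.run_ps("Get-Service | Where-Object {$_.Status -eq 'Stopped'}")

print(result.status_code)          # 0 on success
print(result.std_out.decode())     # stopped services on the remote host
```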

        

Configuring WinRM

For those who rely on PowerShell scripts to monitor applications running on remote hosts, you will first need to configure WinRM. But this isn’t as easy as it sounds. The process is error-prone, tedious, and time-consuming, especially when you have a really large environment. To get started, you will need to enable the Windows firewall on the server where you want to configure WinRM. Here is a link to a blog that explains, step by step, how to configure WinRM on every computer or server. Key steps include:

[Image: key WinRM configuration steps]

           

Alternative: Automate WinRM Configuration

Unfortunately, manual methods can take up too much of your time, especially if you have multiple apps and servers. With automated WinRM configuration, remotely executing PowerShell scripts can be achieved in minutes. SolarWinds Free Tool, Remote Execution Enabler for PowerShell, helps you configure WinRM on all your servers in a few minutes.

  • Configure WinRM on local and remote servers.
  • Bulk configuration across multiple hosts.
  • Automatically generate and distribute certificates for encrypted remote PowerShell execution.

          

Download the free tool here.

            

How do you manage your servers and apps that are hosted remotely? Is it using an automated platform, PowerShell scripts, or manual processes? Whatever the case, drop a line in the comments section.

In my previous blog, I discussed the difficulties of manual IP address management (IPAM). Manual management can result in poor visibility, inefficient operations, compromised security, and the inability to meet audit requirements for compliance. Many of the comments on the blog leaned towards using an automated solution. Here are 4 basic best practices for role delegation, an essential criterion for efficient IPAM.

 

Effective distribution of responsibility across and within teams: Access control is an important consideration when multiple users have access to the IPAM system.

As IP administration operations touch several teams, it is recommended to:

  • Distribute tasks based on responsibilities and expertise of teams and individuals.
  • Securely delegate IP management tasks to different administrators without affecting current management practices.
  • Avoid bottlenecks, inefficiencies and errors while configuring different systems, accessing an available IP, or while making DHCP/DNS changes.

For example, the server team can delegate management of DNS/DHCP and IPAM functions to the network team while keeping control of the rest of the Windows server functionality. Network teams in turn can divide responsibilities based on the location or expertise within the group and delegate even simpler tasks, like new IP address assignments to the IT Helpdesk.


Different admins have unique role-based control: Role-based control helps ensure secure delegation of management tasks. Various role definitions permit different levels of access restrictions and also help track changes. This way you can maintain security without limiting the ability to delegate required IP management activities. Some examples of role-based control are:

  1. Administrator role or the Super User - full read/write access, initiate scans to all subnets, manage credentials for other roles, create custom fields, and full access to DHCP management and DNS monitoring.
  2. Power Users - varied permissions/access rights restricted to managing subnets and IP addresses only, management of supernet and group properties, and creation of custom data fields on portions of the network made available by the site administrator.
  3. Operator - access to the addition/deletion of IP address ranges and the ability to edit subnet status and IP address properties on the allowed portions of the network.
  4. Read Only Users - have only read access to DHCP servers, scopes, leases, reservations, and DNS servers, zones, records.
  5. Custom access - where the role is defined on a per subnet basis. DHCP and DNS access depends on the Global Account setting.
  6. Hide - Restrict all access to DHCP & DNS management.

Ultimately, control lies with the super user who can assign roles as per the needs and requirements of the network or organization.
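As a toy sketch of the role-based control described above, the mapping below ties role names (mirroring the list) to permission strings. The permission names themselves are invented for illustration and don't correspond to any particular IPAM product.

```python
# Toy role-to-permission mapping and access check; permission strings are invented.
ROLE_PERMISSIONS = {
    "administrator": {"subnet:read", "subnet:write", "scan:start", "dhcp:manage", "dns:monitor"},
    "power_user":    {"subnet:read", "subnet:write", "custom_field:create"},
    "operator":      {"subnet:read", "ip_range:add", "ip_range:delete", "ip:edit"},
    "read_only":     {"subnet:read", "dhcp:read", "dns:read"},
    "hide":          set(),                      # no access to DHCP & DNS management
}

def is_allowed(role, permission):
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("operator", "ip_range:add"))    # True
print(is_allowed("read_only", "subnet:write"))   # False
```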


Administering and inheriting rights: Setup and assignment of roles need to be easy and not time-consuming. The effectiveness of an IPAM solution lies in the ease of management of the system itself. Many automated IPAM solutions integrate with Windows Active Directory (commonly used in networks), making it easier to create and assign user roles for IPAM. Built-in role definitions help quickly assign and delegate IPAM tasks to different users.


Change approval or auditing: Compliance standards require that all changes made to the IP address pool be recorded and that change history for IP addresses be maintained. Any change to the IP address, DHCP, and DNS management structure must be logged separately and maintained centrally.

A permissioned access system ensures that only approved/authorized personnel are allowed to make changes to IP address assignments. Ideally, an IP management system should allow administrative access to be delegated by subnet.

 

Maintaining a log for changes helps avoid errors and also simplifies the process of troubleshooting and rollback of unintended administrative changes. Automated IPAM solutions enable auditing by recording every change in the database along with the user name, date, time of modification, and details of the change. The audit details are published as reports and can be exported, emailed or retrieved as required for further analysis. Some examples of these reports are: Unused IP Addresses, Reserved-Static IP Addresses, IP Usage Summary, etc.
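A minimal sketch of such an audit record is shown below; the field names and the example entry are assumptions for illustration, not the schema of any particular IPAM database.

```python
# Record every change with who, when, what -- the shape of a simple audit entry.
import datetime

audit_log = []

def record_change(user, action, details):
    audit_log.append({
        "user": user,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
        "details": details,
    })

record_change("jsmith", "ip_assignment", {"ip": "10.1.20.14", "status": "reserved"})
print(audit_log[-1])
```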


Conclusion

In conclusion, it’s quite clear that manually managing IP addresses can be a resource drain. On the other hand, investing in a good IPAM solution provides you with effective IPAM options and, more importantly, tangible business benefits, including a compelling return on investment.


Do you agree that role delegation does help ease some load off the network administrator’s back?
