
Geek Speak


The month of July is always special for various reasons. We officially welcome the summer heatwave, and it's also one of those times when we look forward to taking a break and spending time with family. Another reason July is special is that it's time to give thanks to your friendly Systems Administrator: yes, the person you always call for help when you get locked out of your computer, when your email stops working, when the internet is down, when your mouse stops responding, or when you need just about anything.

        

To all the SysAdmins and IT superheroes out there, SysAdmin Day is fast approaching. And this year, just like every year, we at SolarWinds dedicate the entire month of July to SysAdmins across the world and we invite you to join the festivities as we celebrate SysAdmin Day in the biggest possible way.

       

SysAdmin Day can mean a lot of things to IT pros and organizations. But, what I think makes SysAdmin Day a day to remember is being able to share your journey with fellow SysAdmins and IT pros. So, in the comment section, share why you chose this career path, the meaning behind those defining moments, or remind us about the day you knew you were going to be a SysAdmin. Take this time to also narrate funny instances or end-user stories that made you laugh or challenging situations you successfully dealt with in your very own server rooms.


We're super thrilled about this year's SysAdmin Day because we will have fun weekly blogs on Geek Speak to celebrate, plus a thwack monthly mission that offers weekly contests, exciting giveaways, and some sweet prizes.

            

Now it’s time to get the party started. Visit the July thwack monthly mission page today!


What do we mean today when we talk about managing our environments in the cloud? In the old physical server days, we had diverse systems to manage the network, the storage, and the server infrastructure. As time moved on, these systems began to merge into products like Spectrum and OpenView. Many players came into a space that quite often revolved around a vendor-specific tool; your server manufacturer would often tie you to a specific management tool.

 

Again, as time moved on, we began to see third-party tools built on SNMP traps and APIs that were no longer unique to particular vendors, which furthered our ability to monitor hardware for faults and to alert on high utilization or failures. This extended our capabilities considerably. But were these tools adequate to handle the needs of a virtual environment? Well, in enterprises, we had virtualization management tools to give us good management of that infrastructure as well. However, we still had to dig into our storage and our networks to find hot spots, so this was not going to let us expand our infrastructure into hybrid and secondary environments.

 

This whole world changed drastically as we moved things to the cloud. Suddenly, we needed to manage workloads that weren't necessarily housed on our own infrastructure, we needed to be able to move them dynamically, and we needed to make sure that connectivity and storage in these remote sites, as well as our own, could be monitored within the same interface. Too many "panes of glass" were simply too demanding for our already overtaxed personnel. In addition, we were still in a monitor-but-not-remediate mode. We needed tools that could not only alert us to problems, but also help us diagnose and repair the issues that arose, as they inevitably did, quickly and accurately. It was no longer enough to monitor our assets. We needed more.

 

Today, with workloads sitting in public, managed, and private spaces, yet all ours to manage, we find ourselves in a quandary. How do we move them? How do we manage our storage? What about using new platforms like OpenStack or a variety of different hypervisors? These tools are getting better every day, moving toward a model wherein your organization will be able to use whatever platform, storage, and networking you require to manage your workloads, your data, and your backups, and move them about freely. No single tool is there yet, but many are close.

 

In my opinion, the brass ring will be when we can live migrate workloads regardless of location, virtualization platform, and so on. To be sure, there are tools that will allow us to do clones and cutovers, but moving workloads live, with no data loss and no impact on our user base, to AWS, to our preferred provider, or in and out of our own data centers as we see fit is truly the way of the future.

If you asked Michael Jordan why he was so successful, he'd probably tell you it was because he spent four hours each day practicing free throws. The fundamentals are everything.

 

“You can practice shooting eight hours a day, but if your technique is wrong, then all you become is very good at shooting the wrong way. Get the fundamentals down and the level of everything you do will rise.”

- Michael Jordan


This can be extended to all things, and planning your storage environment is no exception. As a storage administrator, you obviously consider important parameters like device performance, storage consumption, and total cost of ownership when writing a storage strategy. But have you given thought to basic things like understanding data growth or the business importance of data? They have a large impact on day-to-day storage operations and thus on the business. In this post, I will touch upon two points that you can consider in your storage strategy blueprint.


Analyze Your Data in More Ways than One 

 

Data forms the crux of your storage. So, before you draft your storage strategy, you need to go through your data with a fine-tooth comb. You should have a basic understanding of where your data comes from, where it will reside, which data will occupy what kind of storage, and so on. It is widely believed that in most enterprises, 80% of data is not frequently accessed by business users. If that is the case, why does all of it need to reside on high-performing storage arrays? Normally, only 20% of data is regularly needed by the business and is considered active. This lets you place the other 80% on a lower-cost tier that provides adequate performance, reserving your high-performing storage for active data.
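To put the 80/20 idea into practice, you first need a way to measure how "active" your data really is. Below is a minimal sketch in Python that walks a file share and buckets capacity by last-access age; the share path and the 90-day cutoff are illustrative assumptions, and last-access times are only meaningful if your filesystems actually record them.

```python
import os
import time

SHARE_ROOT = r"\\fileserver\projects"   # hypothetical share to analyze
ACTIVE_DAYS = 90                        # assumed cutoff for "active" data


def bucket_by_access_age(root, active_days=ACTIVE_DAYS):
    """Return total bytes of 'active' vs 'cold' files under root."""
    cutoff = time.time() - active_days * 86400
    active_bytes = cold_bytes = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue                # skip files we cannot read
            if st.st_atime >= cutoff:
                active_bytes += st.st_size
            else:
                cold_bytes += st.st_size
    return active_bytes, cold_bytes


if __name__ == "__main__":
    active, cold = bucket_by_access_age(SHARE_ROOT)
    total = (active + cold) or 1
    print(f"Active: {active / 2**40:.2f} TB ({100 * active / total:.0f}%)")
    print(f"Cold:   {cold / 2**40:.2f} TB ({100 * cold / total:.0f}%)")
```

Even a rough report like this gives you a defensible starting point for deciding how much capacity can move to a cheaper tier.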

 

Another overlooked factor is the business value of data. An employee leave-balance record is normally not as important as a quarterly financial projection. Understanding the significance of your data can help you assign storage accordingly.

 

The last step is understanding the life cycle of data. Information that is critical today may lose its importance in the long run. A regular audit of the data lifecycle will help you understand what data needs to be archived, which in turn saves storage space and budget. Having a good understanding of your data landscape will help you plan your future storage requirements more accurately.


Collaborate with Your Business Units Frequently

 

For a storage expert, running out of disk space is not an option, so staying on top of storage capacity is truly your top priority. But you may not be able to achieve this unless you frequently collaborate with the business units in your organization. With a storage monitoring tool, you can accurately predict when you will run out of free space, but that might not be sufficient. Why?

 

Here is an example: suppose you are planning for 50 TB of data growth across your 5 business units over the next year, 10 TB each, based on each unit's storage consumption the previous year. Then your company acquires another company that needs an additional 30 TB of storage. In this scenario, you will be forced to make a quick storage purchase, which will strain your limited budget.
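As a rough back-of-the-envelope check, you can compare a pure trend-based forecast with one that also counts known business events. The sketch below simply re-runs the numbers from the example above; the free-capacity figure is an assumption for illustration.

```python
current_free_tb = 60.0            # assumed free capacity today (illustrative)

organic_growth_tb = {             # trend-based forecast per business unit
    "BU1": 10, "BU2": 10, "BU3": 10, "BU4": 10, "BU5": 10,
}
business_events_tb = {            # growth you only learn about by talking to the business
    "acquisition": 30,
}

trend_only = sum(organic_growth_tb.values())
with_events = trend_only + sum(business_events_tb.values())

print(f"Trend-only forecast:  {trend_only} TB needed, "
      f"{current_free_tb - trend_only:+.0f} TB headroom")
print(f"With business events: {with_events} TB needed, "
      f"{current_free_tb - with_events:+.0f} TB headroom")
```

The gap between the two numbers is exactly the surprise that regular conversations with the business units would have surfaced in time to budget for it.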

 

By better understanding each business unit's plans, you could have planned to accommodate the additional storage requirements. In fact, in larger organizations, legal and compliance teams play an important role in shaping the storage strategy. These functions rely heavily on storage teams to meet many mandatory regulatory requirements. Frequent collaboration with your company's business units will equip you with the knowledge of how data is expected to grow in the future. This will allow you to understand your future storage needs and plan your budgets accordingly.

 

These are just a couple of the many aspects that contribute to a successful storage strategy. The two points above are subtle ones that can easily be missed if not fully understood. What other important factors do you take into account when mapping out your storage strategy? What common pitfalls have you faced while drafting one? Share your experience.

TRIBUTE TO 'GEEK SPEAK AMBASSADORS'

I want to share with the thwack community the latest e-book that we developed at SolarWinds. Why is this so special and rewarding to thwack? Well, because we have picked out some great contributions by our esteemed Geek Speak ambassadors, put a creative spin on them, and presented them as an e-book with an instructive story and some fetching visuals and graphics.

 

AN ENGAGING E-BOOK TO BEHOLD

Outline: Meet Cameron (he thinks of himself as an average, run-of-the-mill IT guy, but in reality is way more than that), who was called upon to manage the IT team of a mid-sized call center. This is a company that was growing rapidly in staff and business operations, but kept itself quite conservative in adopting new trends and technologies.

 

In Cameron's journal, you will get to read 4 chapters about how he confidently explores new avenues for changing the work culture in his IT team, and how he plans to implement new processes and practices for greater productivity, teamwork, and successful help desk operations.

 

CAMERON’S CHRONICLES

This e-book is available in 4 chapters. Just click on each topic to read the contents of that chapter.

Chapter 1: Building a Positive and Encouraging Help Desk Culture

Read now »


Chapter 2: Defining SLA for Successful Help Desk Operations

Read now »


Chapter 3: Developing Workflows Using Help Desk Data

Read now »


Chapter 4: How to Fence Off Time for a Help Desk

Read now »


 

Tell us what you thought about this e-book and whether you have other ideas for using and repurposing some of your awesome thwack posts and thought contributions.

 

You can also download the full e-book as a PDF from the link below.

scuff

What Is Your Why?

Posted by scuff Jun 29, 2015

Browsing the discussions and resources here in the Thwack community, I can see that you are an incredible bunch of people - passionate and knowledgeable about your own areas of expertise & eager to guide & advise others.

 

But, for a moment, let’s look up from what we are doing. Let’s talk about your Why.

I'm sure you all know what you do. That would be easy to tell someone, right? And you could probably also go into great detail about how you do it. But do you have a real vision for why you do what you do? Do you have a vision that extends beyond "because it's my job" or "because the servers would fall over if I didn't"?

 

This isn't a deeply technical topic, but it can become your secret weapon. Your Why can lift you during a frustrating day. It can get you through a 1am call out (as well as all the caffeine). And it will be music to the ears of management or the business when you come to them with a problem or a recommendation. If you can frame your conversations with them to address their Why, I can guarantee you better success.

 

“I'm a sys admin. I monitor and maintain our servers. I do this to keep the business running. The end users can then serve their customers.” Easy so far, right?

So what does your company do for its customers, who are ultimately your customers? Is the product or service vision of your company as deeply ingrained in your mind as it is in the marketing department?

Does your company give peace of mind to people and help them in the toughest times … i.e. are you an insurance company?

 

So really, you give people peace of mind & help them in the toughest times by ensuring that your staff have fast access to their computer systems when they need them.

 

Tell me: what is your Why? How do you think your Why can influence how you work & what decisions you make?

 

Simon Sinek explains Apple’s Why: https://www.youtube.com/watch?v=sioZd3AxmnE

 

P.S. If you are in this for the thrill of working with technology, I get that too.

As a child of the 80's, there are particular songs (like "Danger Zone", "Shout", "No Sleep Till Brooklyn") that really bring me back to the essence of that time period (RIP mullet). There are even times where current events take me back to a specific song. Take today’s "storage market" and all the technologies that are being discussed. Looking at this has me going back to the classic Public Enemy song, "Don't believe the hype…” There is so much "hype" in regard to different technologies that it can be overwhelming to keep up, let alone make a decision on what is right for your company. You also have to manage the pressures of business needs, storage performance needs, data protection, data growth, and resource constraints to just name a few. I might come off pro old-school IT, but I’m not. Ask yourself some of the questions below, and make sure the promise of these new trends makes sense for your business before you jump on the bandwagon.

 

Hyper-convergence

 

Hyper-convergence is a software infrastructure solution that combines compute, storage, networking, and virtualization resources in commodity boxes. The promise is integrated technologies that provide a single view for the administrator. This makes it easier for users to deploy, manage, grow, and support their environment because everything is tied together. This is great for quite a few environments, but is it great for "your" environment? What do your VM workloads look like? Do they all have similar resource requirements, or are some VMs more demanding than others? Do your resource needs (CPU, memory, storage, etc.) grow evenly, or are some resources growing faster than others?

 

If you’re considering a hyper-converged solution, check out this whitepaper: The Hyper-Convergence Effect: Do Management Requirements Change?

 

Solid State Drives

 

Solid state drives have been around for decades, but adoption has really grown over the last few years as new technology advances (PCIe/NVMe) arrived and the cost of flash came down dramatically. The promise of SSDs is higher performance, better durability, cooler operation, and denser form factors. This has led to claims that hard drives are dead and SSD (flash) is all that is needed in your data center. Is this right for "your" environment? Do you have a need for high performance across your entire environment? What is your capacity growth, and how does it compare to your performance growth? Will your applications take advantage of SSDs? Do you have the budget for flash storage across all applications?

 

If you are considering making a move to Solid State drives, check out this SolarWinds Whitepaper: How do I know my SQL and Virtual Environments are ready for SSD?

 

Cloud Storage

 

For years people have been talking about "the cloud," whether private or public. For this post, we'll talk about public clouds. Over the last couple of years we have seen more businesses adopt cloud into their data storage environment. The promise includes letting companies access their data anywhere, freeing up IT resources, providing scalability to grow the business, and reducing IT costs, to name a few. This has led to claims that everything is going to the cloud and that keeping storage "on premises" is no longer ideal. For many companies, this might be true, but is it ideal for "your" environment? What happens if there is an "outage," whether at the cloud provider or in your connection to the cloud? Do you have the bandwidth to support your users' access from an external location? Which cloud provider are you using, and are you locked in to that provider? How will you manage your data security and protect against attacks on that data?

 

These are just a few of the "storage technologies" currently being "hyped" in the market, and each of them has a place in almost all data centers. However, just because a new technology solves certain data center problems does not mean it will solve "your" problems. Understanding your problems and where you want to take your business is the best way to move past the "hype" of a new technology and really see the value it can provide.

 

Now, what do you think? Is there too much "hype" in the storage market? What storage technology do you think is being over "hyped"?       

A couple of weeks ago, I wrote about how I love documentation and how useful it can be in the future, for you or someone else. I also lamented slightly about how it is hard on most projects to get the time to write good reference documentation. This week, I'm going to discuss one possible way to knock out some of the more time-consuming parts.

 

The key word is: Design. That is--the "Design" phase of the project. In most projects, regardless of the flavor of SDLC utilized, there is a block of time to be utilized for designing the solution. Since this is work that will be happening anyway, getting some good documentation out of it is simply a matter of writing down what gets discussed, decisions that get made (along with the "why"s, of course), and how the solution(s) will be structured. Chances are these things get written down, anyway, but outside the mindset of their possible use as future reference material. Usually, by its nature, designing a project will produce some useful artifacts; things like high-level architecture diagrams or possibly an ERD or two. If it's a data-integration or BI project, and one is into details, source-to-target mappings are likely generated at this point.
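For readers who have not seen one, a source-to-target mapping can be as lightweight as a structured list of field-level rules captured during design. The sketch below is purely hypothetical (table and column names invented for illustration); the point is that the "what" and the "why" of each field get written down before any ETL code exists.

```python
# A minimal source-to-target mapping captured during the design phase.
# Every name here is hypothetical; what matters is recording the rule and
# the reasoning alongside each field.
SOURCE_TO_TARGET = [
    {
        "source": "crm.customers.cust_nm",
        "target": "dw.dim_customer.customer_name",
        "rule": "Trim whitespace; title-case",
        "why": "Business wants display-ready names in reports",
    },
    {
        "source": "crm.customers.created_dt",
        "target": "dw.dim_customer.created_date",
        "rule": "Convert from local time to UTC date",
        "why": "Warehouse standardizes all dates on UTC",
    },
]

for row in SOURCE_TO_TARGET:
    print(f"{row['source']} -> {row['target']}: {row['rule']} ({row['why']})")
```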

 

All of these items add up to a decent set of notes for the future, explaining the solution and where it came from. This way, even if no dedicated time can be worked into a project for documentation, useful items can be produced.

 

I think there's another benefit to this course of action. I have a phrase I use to describe my philosophy on this topic: I don't like to think while I'm writing ETL. This sounds kind of bad on the surface, but what I really mean is this: when it comes time to actually sling some code, I want to be able to concentrate on the mechanical details, things like doing everything right, following best practices, addressing security concerns, and making things perform as well as they can, all right from the get-go. With the data flows already drawn out and the business rules documented and agreed upon, it is easy to focus on those things.

 

When I wrote about this before, we had a fairly nice conversation about how hard it is to get the time to write documentation. Would beefing up the efforts during the design phase help? Are y'all even able to get good time for the design stage in the first place? (I've been in predicaments where that gets axed, too.)

Vegaskid

Will the arms race ever end?

Posted by Vegaskid Jun 22, 2015

In my fourth post in my tenure as Thwack ambassador for June, I thought I would talk about what appears to be the never ending battle between good and bad. If I can get to the end without mentioning 'cyber' more than that single reference and various different coloured hats, then my work here will be done. The purpose of this post is to hopefully spark some discussion around the topic and I would love to hear what your take is.

 

Attacks on computer systems are nothing new. The web is full of stories of viruses that seem to go back further and further in time the more you look. The first I am aware of is the Creeper virus, which was released on ARPANET way back in 1971, before even this oldie was born. Over forty years later, anti-virus vendors have still failed to deliver adequate protection against viruses, trojans, worms and other similar bad things that I will bundle together under the label of malware. The problem doesn't stop at deliberately malicious code either: software bugs, interoperability issues between different systems, 'helpful' developer back doors. It seems that no sooner has one attack vector been patched up than another 100 fill its place. Technology has for the longest time been used by both sides to get a leg up on the other.

 

The fact that technology and our pace of life are advancing at an ever increasing rate means that this cycle is getting ever more frequent. Personally, I feel that this is one of the key reasons why it will never end. That sounds a bit depressing, but I am a realist at heart (often confused by the rose-tinted-spectacle-wearing brigade for a sceptic), so I strongly believe that if you follow a number of best practices (some of which I highlighted in my first post, Defence in depth), keep up to date with relevant industry news and events, build a good internal culture with all staff bought in, maintain good documentation and processes with buy-in from the top down, and work together as a mature community, we give ourselves a better chance of being protected. It's not unreasonable to state that the majority of drive-by attackers will give up and move on if you present a big enough obstacle to penetrate. If you don't offer any real defences though, thinking all is lost, you will almost certainly experience that as a self-fulfilling prophecy.

 

Let me know what your thoughts are on my scribbles above and what you think the battlefield will look like in 20 years' time.

"Shadow IT” refers to the IT systems, solutions, and services used by employees in an organization without the approval, knowledge, and support of the IT department. It is also referred to as “Stealth IT.” In its widely known usage, Shadow IT is a negative term and is mostly condemned by IT teams as these solutions are NOT in line with the organization's requirements for control, documentation, security, and compliance. Given that this increases the likelihood of unofficial and uncontrolled data flows, it makes it more difficult to comply with SOX, PCI DSS, FISMA, HIPAA, and many other regulatory compliance standards.


  

The growth of shadow IT in recent years can be attributed to the increasing consumerization of technology, cloud computing services, and freeware services online that are easy to acquire and deploy without going through the corporate IT department.

  • Usage of Dropbox and other hosted services for storing and exchanging corporate information can be shadow IT.
  • Installation and usage of non-IT-approved software on company-provided devices is also shadow IT. Whether it is installing a photo editing tool, music player, or a pastime game, if your IT regulations are against them, they can also be shadow IT.
  • BYOD, not in accordance with the IT policy, can contribute to shadow IT as IT teams have no way of finding out and protecting corporate data stored on personal devices.
  • Even usage of USB drives or CDs to copy corporate data from corporate devices can be considered shadow IT, if the company’s IT policy has mandated against it.

 

CHALLENGES & ADVERSE IMPACT OF SHADOW IT

The foremost challenge is upholding security and data integrity. We risk exposing sensitive data to sources outside the network firewall, and also risk letting malicious programs and malware into the network, causing security breaches. Some companies take this very seriously and stipulate strict IT regulations that require IT administrator access to install new software on employee workstations. Some websites can also be blocked on the corporate network if there is a chance of employees exposing data through them. These could be social media, hosted online services, personal email, etc.

 

There have been various instances of compliance violations and financial penalties for companies that have had their customer information hacked due to the presence of intrusive malware in an employee’s system, leading to massive data breaches. Should we even start talking about the data breaches on the cloud? It'll be an endless story.

 

Additionally, shadow IT sets the stage for asset management and software licensing issues. The onus falls on the IT department to constantly scan for non-IT-approved software and services being used by employees and remove them according to policy.

 

SHOULD SHADOW IT ALWAYS REMAIN A TABOO?

This is a debatable question because there are instances where shadow IT can be useful to employees. If IT policies and new software procurement procedures are too bureaucratic and time-consuming, and employees can get the job done quickly by resorting to free tools available online, then, from a business perspective, why not? There are also arguments that, when implemented properly, shadow IT can spur innovation. Organizations can find faster and more productive ways of doing work with newer and cheaper technologies.

 

What is your take on shadow IT? No doubt it comes with more bane than boon. How does your organization deal with it?

Fresh out of high school, I got a job working in a large bank.  My favorite task was inputting sales figures into a DOS based system and watching it crash when I tried to print reports. I extended my high school computing knowledge by working over the phone with second level support. I confessed that I could understand what they were doing, but I wouldn’t know where to start.

 

They saw some raw potential in me and invited me onto an IT project to roll out new computers to the bank branches nationwide. I was to learn everything on the job: DOS commands, device drivers, Windows NT architecture, TCP/IP addressing, token ring networks, SMTP communications and LDAP architecture. My first IT task was cloning computers off an image on a portable backpack, when the registry didn’t exist & Windows was just a file copy away.

 

Very little was done with a GUI. The only wizard was Windows Installer. When you learn about IT from the ground up and you understand the concepts, you can then troubleshoot. When things didn’t work (and you couldn’t just google the error) I could apply my knowledge of how it should be working, know where to start with some resolution steps and ‘trial and error’ my way out of it. Now, that knowledge means I can cut through the noise of thousands of search results and apply ‘related but not quite the same’ scenarios until I find a resolution.

 

More than one person has commented to me that this generation of teenagers doesn't have the same tech-savvy skills as the previous one, because they are used to consumer IT devices that generally just work. So, in a world where technology is more prevalent than ever, do we get back to basics enough? Do we teach the mechanics of it all to those new to the industry or to someone looking for a solution? Or do we just point them to the fix?

Data people and developers don't really get along very well. There have been lots of reasons for this historically, and, I'm guessing, as time goes on, there will continue to be conflict between these two groups of people. Sometimes these disagreements are petty; others are more fundamental. One of the areas I've seen cause the most strife is shared project work using an Agile software development lifecycle. I know talking about Agile methodologies and data-related projects/items in the same sentence is a recipe for a serious religious battle, but here I want to keep the conversation to a specific couple of items.

 

The first of these two items is what happens when an application is developed using an ORM and a language that allow the dev team to not focus on the database or its design. Instead, the engineer(s) only need to write code and allow the ORM to design and build the database schema underneath. (Although this has been around for longer than Agile processes have been, I've seen a lot more of it on Agile projects.) This can lead to disconnects for a Development DBA-type person tasked with ensuring good database performance for the new application or for a Business Intelligence developer extracting data to supply to a Data Mart and/or Warehouse.
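As an illustration of how the schema can become a by-product of application code, here is a minimal sketch using the SQLAlchemy ORM; the model is hypothetical and not tied to any project described here. The table, its columns, and its (lack of) indexes exist only because the class says so, and they materialize only when create_all() runs.

```python
from sqlalchemy import create_engine, Column, Integer, String, DateTime
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Ticket(Base):
    """Hypothetical application entity; the table is derived from this class."""
    __tablename__ = "tickets"

    id = Column(Integer, primary_key=True)
    title = Column(String(200), nullable=False)
    status = Column(String(20), default="open")
    created_at = Column(DateTime)
    # Note: no indexes beyond the primary key unless a developer adds
    # index=True here -- which is exactly where a DBA would want input.


engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)   # the schema only materializes at this point
print(Ticket.__table__)
```

Until that last line runs somewhere, the DBA and the ETL developer have nothing concrete to tune or map against, which is the disconnect described above.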

 

Kind of by its nature, this use of an ORM means the data model might not be done until a lot of the application itself is developed…and this might be pretty late in the game. In ETL Land, development can't really start until actual columns and tables exist. Furthermore, if anything changes later on, it can be a lot of effort to make changes in ETL processes. For a DBA that is interested in performance-tuning new data objects/elements, there may not be much here to do--the model is defined in code and there isn't an abstraction layer that can "protect" the application from changes the DBA may want to make to improve performance.

 

The other problem affects Business Intelligence projects a little more specifically. In my experience, it's easy for answers to "why" questions that have already been asked to get lost in the documentation of User Stories and their associated Acceptance Criteria. Addressing "why" data elements are defined the way they are is super-important to designing useful BI solutions. Of course, the BI developer is going to want/need to talk to the SMEs directly, but there isn't always time for this allotted during an Agile project's schedule.

 

I've found the best way to handle all this is focusing on an old problem in IT and one of the fundamental tenets of the Agile method: Communication. I'll also follow that up with a close second place: Teamwork. Of course, these things should be going on from Day 1 with any project…but they are especially important if either item discussed above is causing major problems on a project. As data people, we should work with the development team (and the Business Analysts, if applicable) from the get-go, participating in early business-y discussions so we can get all of the backstory. We can help the dev team with data design to an extent, too. From a pure DBA perspective, there's still an opportunity to work on indexing strategies in this scenario, but it takes good communication.

 

Nosing into this process will take some convincing if a shop's process is already pretty stable. It may even involve "volunteering" some time for the first couple projects, but I'm pretty confident that everyone will quickly see the benefits, both in quality of project outcome and the amount of time anyone is "waiting" on the data team.

 

I've had mixed feelings (and results) working this type of project, but with good, open communication, things can go alright. For readers who have been on these projects, how have they gone? Are data folks included directly as part of the Agile team? Has that helped make things run more smoothly?

IT pros now have the added responsibility of knowing how to troubleshoot performance issues in apps and servers that are hosted remotely, in addition to monitoring and managing servers and apps that are hosted locally. This is where tools like Windows Remote Management (WinRM) come in handy, because they allow you to remotely manage, monitor, and troubleshoot applications and Windows server performance.

                   

WinRM is based on Web Services Management (WS-Management), which uses Simple Object Access Protocol (SOAP) requests to communicate with remote and local hosts, multi-vendor server hardware, operating systems, and applications. If you are predominantly using a Windows environment, then WinRM provides you with remote management capabilities to do the following:

  • Communicate with remote hosts using a port that is commonly left open by firewalls and client machines on a network.
  • Quickly start working in a cloud environment and remotely configure WinRM on EC2, Azure, etc. and monitor the performance of apps in such environments.
  • Ensure smoother execution and configuration for monitoring and managing apps and servers hosted remotely.

        

Configuring WinRM

If you rely on PowerShell scripts to monitor applications running on remote hosts, you will first need to configure WinRM. But this isn't as easy as it sounds. The process is error prone, tedious, and time consuming, especially when you have a really large environment. To get started, you will need to enable the Windows firewall on the server on which you want to configure WinRM. Here is a link to a blog that explains, step by step, how to configure WinRM on each computer or server.
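Once WinRM is enabled on a target server, one quick way to confirm that remote execution actually works is to run a small PowerShell command over WinRM from another machine. This sketch uses the third-party pywinrm Python package; the host name and credentials are placeholders, and your environment may need a different transport or an HTTPS listener.

```python
import winrm  # third-party "pywinrm" package: pip install pywinrm

# Placeholder host and credentials -- substitute your own.
session = winrm.Session("server01.example.com", auth=("administrator", "password"))

# Run a trivial PowerShell command to prove the WinRM listener is reachable.
result = session.run_ps("Get-Service | Select-Object -First 3 Name, Status")

print("exit code:", result.status_code)
print(result.std_out.decode(errors="replace"))
if result.std_err:
    print(result.std_err.decode(errors="replace"))
```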


           

Alternative: Automate WinRM Configuration

Unfortunately, manual methods can take up too much of your time, especially if you have multiple apps and servers. With automated WinRM configuration, you can be remotely executing PowerShell scripts in minutes. The SolarWinds free tool Remote Execution Enabler for PowerShell helps you configure WinRM on all your servers in a few minutes.

  • Configure WinRM on local and remote servers.
  • Bulk configuration across multiple hosts.
  • Automatically generate and distribute certificates for encrypted remote PowerShell execution.

          

Download the free tool here.

            

How do you manage your servers and apps that are hosted remotely? Is it using an automated platform, PowerShell scripts, or manual processes? Whatever the case, drop a line in the comments section.

In my previous blog, I discussed the difficulties of manual IP address management (IPAM). Manual management can result in poor visibility, inefficient operations, compromised security, and the inability to meet audit requirements for compliance. Many of the comments on the blog leaned toward shifting to an automated solution. Here are 4 basic best practices for role delegation, an essential criterion for efficient IPAM.

 

Effective distribution of responsibility across and within teams: Access control is an important consideration when multiple users have access to the IPAM system.

As IP administration operations touch several teams, it is recommended to:

  • Distribute tasks based on responsibilities and expertise of teams and individuals.
  • Securely delegate IP management tasks to different administrators without affecting current management practices.
  • Avoid bottlenecks, inefficiencies and errors while configuring different systems, accessing an available IP, or while making DHCP/DNS changes.

For example, the server team can delegate management of DNS/DHCP and IPAM functions to the network team while keeping control of the rest of the Windows server functionality. Network teams in turn can divide responsibilities based on the location or expertise within the group and delegate even simpler tasks, like new IP address assignments to the IT Helpdesk.


Different admins have unique role-based control: Role-based control helps ensure secure delegation of management tasks. Various role definitions permit different levels of access restrictions and also help track changes. This way you can maintain security without limiting the ability to delegate required IP management activities. Some examples of role-based control are:

  1. Administrator role or the Super User - full read/write access, initiate scans to all subnets, manage credentials for other roles, create custom fields, and full access to DHCP management and DNS monitoring.
  2. Power Users - varied permissions/access rights restricted to managing subnets and IP addresses only, management of supernet and group properties, and creation of custom data fields on portions of the network made available by the site administrator.
  3. Operator - access to the addition/deletion of IP address ranges and the ability to edit subnet status and IP address properties on the allowed portions of the network.
  4. Read Only Users - have only read access to DHCP servers, scopes, leases, reservations, and DNS servers, zones, records.
  5. Custom access - where the role is defined on a per subnet basis. DHCP and DNS access depends on the Global Account setting.
  6. Hide - Restrict all access to DHCP & DNS management.

Ultimately, control lies with the super user who can assign roles as per the needs and requirements of the network or organization.
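To make the role definitions above concrete, here is a minimal sketch of how such a role-to-permission map might look in code. The role and permission names simply mirror the examples above; this is not any particular product's API.

```python
# Hypothetical role-based access map mirroring the role examples above.
ROLE_PERMISSIONS = {
    "administrator": {"read", "write", "scan_subnets", "manage_credentials",
                      "custom_fields", "dhcp_manage", "dns_monitor"},
    "power_user":    {"read", "write", "manage_subnets", "custom_fields"},
    "operator":      {"read", "edit_ip_ranges", "edit_subnet_status"},
    "read_only":     {"read"},
    "hide":          set(),           # no access to DHCP/DNS management
}


def can(role, permission):
    """Return True if the given role grants the given permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())


assert can("operator", "edit_ip_ranges")
assert not can("read_only", "write")
```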


Administering and inheriting rights: Setup and assignment of roles need to be easy and quick. The effectiveness of an IPAM solution lies in the ease of management of the system itself. Many automated IPAM solutions integrate with Windows Active Directory (commonly used in networks), making it easier to create and assign user roles for IPAM. Built-in role definitions help you quickly assign and delegate IPAM tasks to different users.


Change approval or auditing: Compliance standards require that all changes made to the IP address pool be recorded and that change history for IP addresses be maintained. Any change to IP address, DHCP, or DNS management must be logged separately and maintained centrally.

A permissioned access system ensures that only approved/authorized personnel are allowed to make changes to IP address assignments. Ideally, an IP management system should allow administrative access to be delegated by subnet.

 

Maintaining a log for changes helps avoid errors and also simplifies the process of troubleshooting and rollback of unintended administrative changes. Automated IPAM solutions enable auditing by recording every change in the database along with the user name, date, time of modification, and details of the change. The audit details are published as reports and can be exported, emailed or retrieved as required for further analysis. Some examples of these reports are: Unused IP Addresses, Reserved-Static IP Addresses, IP Usage Summary, etc.


Conclusion

In conclusion, it's quite clear that manually managing IP addresses can be a resource drain. On the other hand, investing in a good IPAM solution gives you effective IPAM options and, more importantly, tangible business benefits, including a compelling return on investment.


Do you agree that role delegation helps take some load off the network administrator's back?

Vegaskid

Death by social media

Posted by Vegaskid Jun 15, 2015

In my last article, The human factor, I discussed how you could have the most secure technology that currently exists in place, and how that could all amount to nothing if an attacker can persuade one of your users to do their bidding. In this article, I want to focus on a particular topic that fits in nicely: social media. There are apparently as many definitions of social media as there are people who use it, but in the context of this article, I am referring to online services that people use to dynamically share information and media. Examples of such services include Facebook, Twitter, Instagram and YouTube.

 

The world has certainly changed a lot in the last 10 years as these kinds of services have really taken off. There has been a massive culture shift from people sharing things via snail mail, to email, to social media. Most businesses have a presence across a number of social media sites as applicable, and the vast majority of workers expect to be able to use them for personal use whilst at work. I could go on a rant here about the business risk caused by the lost productivity as social media addicts check in to their accounts every few minutes, but I don't want to be a party pooper. Instead, I will use my shield of security to justify why access to social media, especially from work computers but also on personal equipment on the office network if you have a BYOD policy, presents a risk to businesses that can be difficult to mitigate against.

 

Why? It goes back to the theme of my last post. People. There was a time when we seemed to be winning the battle against the bad guys. Most people (even my Dad!) knew not to be clicking on URLs sent in emails without going through a number of precursory checks. With the previously mentioned culture shift, we have now become so used to clicking on links that our friends and family post on social media that I doubt if the majority of people even stop to think about what they are just about to click on.

 

Consider that people who are active on social media are checking their various feeds throughout the day and you have a recipe for disaster just simmering away, ready to boil over. If you have a loose BYOD policy, or are one of those organisations that gives users local admin accounts (ya know, just to make it easier for them to do their jobs), or your training programme doesn't include social media, then you are opening yourself up to a massive risk.

 

I used to have a colleague many years ago who, having witnessed somebody at work send a trick URL to another colleague which got that person in hot water, told me "you are only ever one click away from being fired". That's a pretty harsh take, but perhaps "you are only ever one click away from data loss" might be a better message to share across your company.

 

As always, I'm really keen to hear your thoughts on the topic of today's post.

Information security is important to every organization, but when it comes to government agencies, security can be considered the priority. A breach or loss of information held by federal agencies can lead to major consequences, even affecting national and economic security.


The Defense Information Systems Agency (DISA) is a combat support agency that provides support to the Department of Defense (DoD), including some of its most critical programs. In turn, this means that DISA must maintain the highest possible security for the networks and systems under its control. To achieve this, DISA developed Security Technical Implementation Guides (STIGs), a methodology for the secure configuration and maintenance of IT systems, including network devices. The DISA STIGs have been used by the DoD for IT security for many years.

 

In 2002, Congress felt civilian agencies weren't making IT security a priority, so to help them secure their IT systems, Congress passed the Federal Information Security Management Act (FISMA). This act requires each agency to implement information security safeguards, audit them, and make an accounting to the President's Office of Management and Budget (OMB), which in turn prepares an annual compliance report for Congress.

 

FISMA standards and guidelines are developed by the National Institute of Standards and Technology (NIST). Under FISMA, every federal civilian agency is required to adopt a set of processes and policies to aid in securing data and ensure compliance.

 

Challenges and Consequences:

 

Federal agencies face numerous challenges when trying to achieve or maintain FISMA and DISA STIG compliance. For example, routinely examining configurations from hundreds of network devices and ensuring that they are configured in compliance with controls can be daunting, especially for agencies with small IT teams managing large networks. Challenges also arise from user errors, such as employees inadvertently exposing critical configurations, not changing defaults, or having more privileges than required. Non-compliance can have severe consequences: not just sanctions, but also the weakening of national security, disruption of crucial services used by citizens, and significant economic losses. There are multiple examples where non-compliance has resulted in critical consequences. For example, a cyber-espionage group named APT1 compromised more than 100 organizations across the world and stole valuable data, including business plans, agendas and minutes from meetings involving high-ranking officials, manufacturing procedures, e-mails, user credentials, and network architecture information.

 

Solution:

 

With all that said, NIST FISMA and DISA STIG compliance for your network can be achieved through three simple steps.

 

1. Categorize Information Systems:

An inventory of all devices in the network should be created, and then each device must be assessed to check whether it is in compliance. You should also bring non-compliant devices to a compliant baseline configuration and document the policies applied.

 

2. Assess Policy Effectiveness:

Devices should continuously be monitored and tracked to ensure that security policies are followed and enforced at all times. Regular audits using configuration management tools should be used to assess policy violations. Further, using penetration testing can help evaluate the effectiveness of the policies enforced.
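As a simple illustration of this step, a configuration audit can be as basic as checking each device's saved running configuration for lines a policy requires and lines it forbids. The policy lines and file paths below are hypothetical examples, not taken from an actual STIG.

```python
# Hypothetical policy: lines that must appear in (or be absent from) each
# device's saved running configuration. Not taken from an actual STIG.
REQUIRED_LINES = ["service password-encryption", "no ip http server"]
FORBIDDEN_LINES = ["enable password"]          # prefer an encrypted secret


def audit_config(path):
    """Return a list of human-readable violations for one config file."""
    with open(path) as f:
        lines = [line.strip() for line in f]
    violations = []
    for required in REQUIRED_LINES:
        if required not in lines:
            violations.append(f"missing required line: {required}")
    for forbidden in FORBIDDEN_LINES:
        if any(line.startswith(forbidden) for line in lines):
            violations.append(f"forbidden line present: {forbidden}")
    return violations


if __name__ == "__main__":
    for device_file in ["configs/router1.cfg", "configs/switch1.cfg"]:  # placeholders
        problems = audit_config(device_file)
        status = "OK" if not problems else "; ".join(problems)
        print(f"{device_file}: {status}")
```

A real configuration management tool does far more than this, but the principle is the same: compare what is running against what policy says should be running, and report the differences.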

 

3. Remediate Risks and Violations:

After all security risks or policy violations are listed, apply a baseline configuration that meets recommended policies or close each open risk after it has been reviewed and approved. Once again, the use of a tool to automate review and approval can speed the process of remediation.

 

In addition to following these steps, using a tool for continuous monitoring of network devices for configuration changes and change management adds to security and helps achieve compliance.

 

If you are ready to start with your NIST FISMA or DISA STIG implementation and need an even deeper understanding of how to achieve compliance, as well as how to automate these processes with continuous monitoring, download the following SolarWinds white paper: "Compliance & Continuous Cybersecurity Monitoring."

 

But for those of you who would like to test a tool before deploying it into the production network, SolarWinds Network Configuration Manager is a configuration and change management tool that can integrate with GNS3, a network simulator. For integration details, refer to the guide here:

https://community.gns3.com/docs/DOC-1903

 

Happy monitoring!
