
Geek Speak

scuff

What Is Your Why?

Posted by scuff Jun 29, 2015

Browsing the discussions and resources here in the Thwack community, I can see that you are an incredible bunch of people - passionate and knowledgeable about your own areas of expertise & eager to guide & advise others.

 

But, for a moment, let’s look up from what we are doing. Let’s talk about your Why.

I'm sure you all know what you do. That would be easy to tell someone, right? And you could probably also go into great detail about how you do it. But do you have a real vision for why you do what you do? Do you have a vision that extends beyond "because it's my job" or "because the servers would fall over if I didn't"?

 

This isn't a deeply technical topic, but it can become your secret weapon. Your Why can lift you during a frustrating day. It can get you through a 1am call out (as well as all the caffeine). And it will be music to the ears of management or the business when you come to them with a problem or a recommendation. If you can frame your conversations with them to address their Why, I can guarantee you better success.

 

“I'm a sys admin. I monitor and maintain our servers. I do this to keep the business running. The end users can then serve their customers.” Easy so far, right?

So what does your company do for its customers, who are ultimately your customers? Is the product or service vision of your company as deeply ingrained in your mind as it is in the marketing department?

Does your company give peace of mind to people and help them in the toughest times … i.e. are you an insurance company?

 

So really, you give people peace of mind & help them in the toughest times by ensuring that your staff have fast access to their computer systems when they need them.

 

So tell me: what is your Why? How do you think your Why can influence how you work & what decisions you make?

 

Simon Sinek explains Apple’s Why: https://www.youtube.com/watch?v=sioZd3AxmnE

 

P.S. If you are in this for the thrill of working with technology, I get that too.

As a child of the '80s, there are particular songs (like "Danger Zone", "Shout", "No Sleep Till Brooklyn") that really bring me back to the essence of that time period (RIP mullet). There are even times when current events take me back to a specific song. Take today's "storage market" and all the technologies that are being discussed. Looking at this has me going back to the classic Public Enemy song, "Don't believe the hype…" There is so much "hype" around different technologies that it can be overwhelming to keep up, let alone make a decision on what is right for your company. You also have to manage the pressures of business needs, storage performance needs, data protection, data growth, and resource constraints, to name just a few. I might come off as pro old-school IT, but I'm not. Ask yourself some of the questions below, and make sure the promise of these new trends makes sense for your business before you jump on the bandwagon.

 

Hyper-convergence

 

Hyper-convergence is a software infrastructure solution that combines compute, storage, networking, and virtualization resources in commodity boxes. The promise is integrated technologies that provide a single view for the administrator. This makes it easier for users to deploy, manage, grow, and support their environment because everything is tied together. This is great for quite a few environments, but is it great for "your" environment? What do your VM workloads look like? Do they all have similar resource requirements, or are some VMs more demanding than others? Do your resource needs (CPU, memory, storage, etc.) grow evenly, or are some resources growing faster than others?

 

If you’re considering a hyper-converged solution, check out this whitepaper: The Hyper-Convergence Effect: Do Management Requirements Change?

 

Solid State Drives

 

Solid state drives have been around for decades, but over the last few years they have really taken off thanks to new technology advances (PCIe/NVMe), and the cost of flash has come down dramatically. The promise of SSD is higher performance, better durability, cooler operation, and denser form factors. This has led to claims that hard drives are dead and SSD (flash) is all that is needed in your data center. Is this right for "your" environment? Do you have a need for high performance across your entire environment? What is your capacity growth, and how does it compare to performance growth? Will your applications take advantage of SSDs? Do you have the budget for flash storage across all applications?

 

If you are considering making a move to Solid State drives, check out this SolarWinds Whitepaper: How do I know my SQL and Virtual Environments are ready for SSD?

 

Cloud Storage

 

For years people have been talking about "the cloud," whether private or public. For this discussion, we will focus on public clouds. Over the last couple of years we have seen more businesses adopt cloud into their data storage environment. The promise includes allowing companies to access their data anywhere, freeing up IT resources, providing scalability to grow the business, and reducing IT costs, to name a few. This has led to claims that everything is going to the cloud and that keeping storage "on premises" is no longer ideal. For many companies, this might be ideal, but is it ideal for "your" environment? What happens if there is an "outage," whether at the cloud provider or in your connection to the cloud? Do you have the bandwidth to support your users' access from an external location? What cloud provider are you using, and are you locked in to that provider? How will you manage your data security and protect against attacks on that data?

 

These are just a few of the "storage technologies" currently being "hyped" in the market, and each of them has a place in almost all data centers. However, just because a new technology solves certain data center problems does not mean it will solve "your" problems. Understanding your problems and where you want to take your business is the best way to move past the "hype" of a new technology and really see the value that it will provide.

 

Now, what do you think? Is there too much "hype" in the storage market? What storage technology do you think is being over "hyped"?       

A couple of weeks ago, I wrote about how I love documentation and how useful it can be in the future, for you or someone else. I also lamented slightly about how it is hard on most projects to get the time to write good reference documentation. This week, I'm going to discuss one possible way to knock out some of the more time-consuming parts.

 

The key word is: Design. That is--the "Design" phase of the project. In most projects, regardless of the flavor of SDLC utilized, there is a block of time to be utilized for designing the solution. Since this is work that will be happening anyway, getting some good documentation out of it is simply a matter of writing down what gets discussed, decisions that get made (along with the "why"s, of course), and how the solution(s) will be structured. Chances are these things get written down, anyway, but outside the mindset of their possible use as future reference material. Usually, by its nature, designing a project will produce some useful artifacts; things like high-level architecture diagrams or possibly an ERD or two. If it's a data-integration or BI project, and one is into details, source-to-target mappings are likely generated at this point.

 

All of these items add up to a decent set of notes for the future, explaining the solution and where it came from. This way, even if no dedicated time can be worked into a project for documentation, useful items can be produced.

 

I think there's another benefit to this course of action. I have a phrase I use to describe my philosophy on this topic: I don't like to think while I'm writing ETL. This sounds kind of bad on the surface, but what I really mean is this: when it comes time to actually sling some code, I want to be able to concentrate on the mechanical details, things like doing everything right, following best practices, addressing security concerns, and making things perform as well as they can, all right from the get-go. With the data flows already drawn out and the business rules documented and agreed upon, it is easy to focus on those things.

 

We had a fairly nice conversation about how hard it is to get the time to write documentation when I wrote about it before. Would beefing up the efforts during the design phase help? Are y'all even able to get good time for the design stage in the first place (I've been in predicaments where that gets axed, too)?

Vegaskid

Will the arms race ever end?

Posted by Vegaskid Jun 22, 2015

In my fourth post in my tenure as Thwack ambassador for June, I thought I would talk about what appears to be the never ending battle between good and bad. If I can get to the end without mentioning 'cyber' more than that single reference and various different coloured hats, then my work here will be done. The purpose of this post is to hopefully spark some discussion around the topic and I would love to hear what your take is.

 

Attacks on computer systems are nothing new. The web is full of stories of viruses that seem to go back further and further in time, the more you look. The first I am aware of is the Creeper virus, which was released on ARPANET way back in 1971, before even this oldie was born. Over forty years later, the anti-virus vendors have still failed to deliver adequate protection against viruses, trojans, worms and other similar bad things that I will bundle together under the label of malware. The problem doesn't stop at deliberately malicious code: software bugs, interoperability issues between different systems, 'helpful' developer back doors. It seems that no sooner has one attack vector been patched up than another 100 fill its place. Technology has for the longest time been used by both sides to get a leg up on the other.

 

The fact that technology and our pace of life are advancing at an ever-increasing rate means that this cycle is getting ever more frequent. Personally, I feel that this is one of the key reasons why it will never end. That sounds a bit depressing, but I am a realist at heart (often mistaken for a sceptic by the rose-tinted-spectacles brigade). I strongly believe that if you follow a number of best practices, some of which I highlighted in my first post (Defence in depth), keep up to date with relevant industry news and events, build a good internal culture (all staff bought in, good documentation and processes, and buy-in from the top down), and work together as a mature community, we give ourselves a better chance of being protected. It's not unreasonable to state that the majority of drive-by attackers will give up and move on if you present a big enough obstacle to penetrate. If you don't offer any real defences, though, thinking all is lost, you will almost certainly find that to be a self-fulfilling prophecy.

 

Let me know what your thoughts are on my scribbles above and what you think the battlefield will look like in 20 years' time.

"Shadow IT” refers to the IT systems, solutions, and services used by employees in an organization without the approval, knowledge, and support of the IT department. It is also referred to as “Stealth IT.” In its widely known usage, Shadow IT is a negative term and is mostly condemned by IT teams as these solutions are NOT in line with the organization's requirements for control, documentation, security, and compliance. Given that this increases the likelihood of unofficial and uncontrolled data flows, it makes it more difficult to comply with SOX, PCI DSS, FISMA, HIPAA, and many other regulatory compliance standards.


  

The growth of shadow IT in recent years can be attributed to the increasing consumerization of technology, cloud computing services, and freeware services online that are easy to acquire and deploy without going through the corporate IT department.

  • Usage of Dropbox and other hosted services for storing and exchanging corporate information can be shadow IT.
  • Installation and usage of non-IT-approved software on company-provided devices is also shadow IT. Whether it is a photo editing tool, a music player, or a casual game, if your IT regulations prohibit it, it counts as shadow IT.
  • BYOD that is not in accordance with the IT policy can also contribute to shadow IT, as IT teams have no way of discovering or protecting corporate data stored on personal devices.
  • Even usage of USB drives or CDs to copy corporate data from corporate devices can be considered shadow IT, if the company’s IT policy has mandated against it.

 

CHALLENGES & ADVERSE IMPACT OF SHADOW IT

The foremost challenge is upholding security and data integrity. Shadow IT risks exposing sensitive data to sources outside the network firewall, and it risks letting malicious programs and malware into the network, causing security breaches. Some companies take this very seriously and stipulate strict IT regulations that require IT administrator access to install new software on employee workstations. Some websites can also be blocked on the corporate network if there is a chance of employees exposing data through them. These could be social media, hosted online services, personal email, etc.

 

There have been various instances of compliance violations and financial penalties for companies that have had their customer information hacked due to the presence of intrusive malware in an employee’s system, leading to massive data breaches. Should we even start talking about the data breaches on the cloud? It'll be an endless story.

 

Additionally, shadow IT sets the stage for asset management and software licensing issues. The onus falls on the IT department to constantly scan for non-IT-approved software and services being used by employees, and to remove them according to policy.

 

SHOULD SHADOW IT ALWAYS REMAIN A TABOO?

This is a debatable question, because there are instances where shadow IT can be useful to employees. If IT policies and new software procurement procedures are too bureaucratic and time-consuming, and employees can get the job done quickly by resorting to free tools available online, then—from a business perspective—why not? There are also arguments that, when implemented properly, shadow IT can spur innovation. Organizations can find faster and more productive ways of working with newer and cheaper technologies.

 

What is your take on shadow IT? No doubt it comes with more bane than boon. How does your organization deal with it?

Fresh out of high school, I got a job working in a large bank. My favorite task was inputting sales figures into a DOS-based system and watching it crash when I tried to print reports. I extended my high school computing knowledge by working over the phone with second-level support. I confessed that I could understand what they were doing, but I wouldn't know where to start.

 

They saw some raw potential in me and invited me onto an IT project to roll out new computers to the bank branches nationwide. I was to learn everything on the job: DOS commands, device drivers, Windows NT architecture, TCP/IP addressing, token ring networks, SMTP communications and LDAP architecture. My first IT task was cloning computers off an image on a portable backpack, when the registry didn’t exist & Windows was just a file copy away.

 

Very little was done with a GUI. The only wizard was Windows Installer. When you learn about IT from the ground up and you understand the concepts, you can then troubleshoot. When things didn’t work (and you couldn’t just google the error) I could apply my knowledge of how it should be working, know where to start with some resolution steps and ‘trial and error’ my way out of it. Now, that knowledge means I can cut through the noise of thousands of search results and apply ‘related but not quite the same’ scenarios until I find a resolution.

 

More than one person has commented to me that this generation of teenagers doesn't have the same tech-savvy skills as the previous one, because they are used to consumer IT devices that generally just work. So, in a world where technology is more prevalent than ever, do we get back to basics enough? Do we teach the mechanics of it all to those new to the industry or to someone looking for a solution? Or do we just point them to the fix?

Data people and developers don't really get along very well. There have been lots of reasons for this historically, and, I'm guessing, as time goes on, there will continue to be conflict between these two groups of people. Sometimes these disagreements are petty; others are more fundamental. One of the areas I've seen cause the most strife is shared project work using an Agile software development lifecycle. I know talking about Agile methodologies and data-related projects/items in the same sentence is a recipe for a serious religious battle, but here I want to keep the conversation to a specific couple of items.

 

The first of these two items is what happens when an application is developed using an ORM and a language that allow the dev team to not focus on the database or its design. Instead, the engineer(s) only need to write code and allow the ORM to design and build the database schema underneath. (Although this has been around for longer than Agile processes have been, I've seen a lot more of it on Agile projects.) This can lead to disconnects for a Development DBA-type person tasked with ensuring good database performance for the new application or for a Business Intelligence developer extracting data to supply to a Data Mart and/or Warehouse.

 

Kind of by its nature, this use of an ORM means the data model might not be done until a lot of the application itself is developed…and this might be pretty late in the game. In ETL Land, development can't really start until actual columns and tables exist. Furthermore, if anything changes later on, it can be a lot of effort to make changes in ETL processes. For a DBA that is interested in performance-tuning new data objects/elements, there may not be much here to do--the model is defined in code and there isn't an abstraction layer that can "protect" the application from changes the DBA may want to make to improve performance.

 

The other problem affects Business Intelligence projects a little more specifically. In my experience, it's easy for answers to "why" questions that have already been asked to get lost in the documentation of User Stories and their associated Acceptance Criteria. Addressing "why" data elements are defined the way they are is super-important to designing useful BI solutions. Of course, the BI developer is going to want/need to talk to the SMEs directly, but there isn't always time for this allotted during an Agile project's schedule.

 

I've found the best way to handle all this is focusing on an old problem in IT and one of the fundamental tenets of the Agile method: communication. I'll also follow that up with a close second place: teamwork. Of course, these things should be going on from Day 1 with any project…but they are especially important if either item discussed above is trying to cause major problems on a project. As data people, we should work with the development team (and the Business Analysts, if applicable) from the get-go, participating in early business-y discussions so we can get all of the backstory. We can help the dev team with data design to an extent, too. From a pure DBA perspective, there's still an opportunity to work on indexing strategies in this scenario, but it takes good communication.

 

Nosing into this process will take some convincing if a shop's process is already pretty stable. It may even involve "volunteering" some time for the first couple projects, but I'm pretty confident that everyone will quickly see the benefits, both in quality of project outcome and the amount of time anyone is "waiting" on the data team.

 

I've had mixed feelings (and results) working this type of project, but with good, open communication, things can go alright. For readers who have been on these projects, how have they gone? Are data folks included directly as part of the Agile team? Has that helped make things run more smoothly?

IT pros now have the added responsibility of knowing how to troubleshoot performance issues in apps and servers that are hosted remotely, in addition to monitoring and managing servers and apps that are hosted locally. This is where tools like Windows Remote Management (WinRM) come in handy, because they allow you to remotely manage, monitor, and troubleshoot applications and Windows server performance.

                   

WinRM is based on Web Services Management (WS-Management), which uses Simple Object Access Protocol (SOAP) requests to communicate with remote and local hosts, multi-vendor server hardware, operating systems, and applications. If you are predominantly using a Windows environment, then WinRM provides you with remote management capabilities to do the following (a short example follows the list):

  • Communicate with remote hosts over a port that is commonly left open on firewalls and client machines on a network.
  • Quickly start working in a cloud environment and remotely configure WinRM on EC2, Azure, etc. and monitor the performance of apps in such environments.
  • Ensure smoother execution and configuration for monitoring and managing apps and servers hosted remotely.
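As a rough illustration of that kind of remote management in practice, here is a minimal PowerShell sketch that pulls CPU and memory counters from a remote host over WinRM. The server name and counter paths are placeholders for the example, not details from the original post:

    # Minimal sketch: query performance counters on a remote host over WinRM.
    # 'APP-SRV-01' and the counter paths are illustrative placeholders.
    $cred = Get-Credential                      # prompt for credentials with remote admin rights
    Invoke-Command -ComputerName 'APP-SRV-01' -Credential $cred -ScriptBlock {
        Get-Counter -Counter '\Processor(_Total)\% Processor Time',
                             '\Memory\Available MBytes' |
            Select-Object -ExpandProperty CounterSamples |
            Select-Object Path, CookedValue     # return the sampled values
    }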

        

Configuring WinRM

If you rely on PowerShell scripts to monitor applications running on remote hosts, you will first need to configure WinRM. But this isn't as easy as it sounds: the process is error-prone, tedious, and time-consuming, especially when you have a really large environment. To get started, you will need to enable the Windows firewall on the server on which you want to configure WinRM. Here is a link to a blog that explains, step by step, how to configure WinRM on every computer or server. Key steps include:

[Image: step-by-step WinRM configuration commands]
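In case the screenshot does not render, here is a hedged sketch of what manual WinRM setup typically involves. These are the standard Windows/PowerShell commands rather than the exact steps from the linked blog; the host name is a placeholder, and domain environments may prefer HTTPS listeners and certificates over TrustedHosts:

    # Minimal sketch of manual WinRM setup on a target server (run from an elevated prompt).
    Enable-PSRemoting -Force                 # starts WinRM, creates an HTTP listener, adds firewall rules
    # winrm quickconfig -q                   # the classic WinRM CLI equivalent of the line above

    # On the management workstation, trust the target host (workgroup/non-Kerberos scenarios).
    # 'APP-SRV-01' is an illustrative placeholder.
    Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'APP-SRV-01' -Force

    # Verify that the WinRM service on the target is reachable before running remote scripts.
    Test-WSMan -ComputerName 'APP-SRV-01'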

           

Alternative: Automate WinRM Configuration

Unfortunately, manual methods can take up too much of your time, especially if you have multiple apps and servers. With automated WinRM configuration, remotely executing PowerShell scripts can be achieved in minutes. SolarWinds Free Tool, Remote Execution Enabler for PowerShell, helps you configure WinRM on all your servers in a few minutes.

  • Configure WinRM on local and remote servers.
  • Bulk configuration across multiple hosts.
  • Automatically generate and distribute certificates for encrypted remote PowerShell execution.

          

Download the free tool here.

            

How do you manage your servers and apps that are hosted remotely? Is it using an automated platform, PowerShell scripts, or manual processes? Whatever the case, drop a line in the comments section.

In my previous blog, I discussed the difficulties of manual IP address management (IPAM). Manual management can result in poor visibility, inefficient operations, compromised security, and the inability to meet audit requirements for compliance. Many of the comments on the blog leaned towards shifting to an automated solution. Here are four basic best practices for role delegation, an essential criterion for efficient IPAM.

 

Effective distribution of responsibility across and within teams: Access control is an important consideration when multiple users have access to the IPAM system.

As IP administration operations touch several teams, it is recommended to:

  • Distribute tasks based on responsibilities and expertise of teams and individuals.
  • Securely delegate IP management tasks to different administrators without affecting current management practices.
  • Avoid bottlenecks, inefficiencies and errors while configuring different systems, accessing an available IP, or while making DHCP/DNS changes.

For example, the server team can delegate management of DNS/DHCP and IPAM functions to the network team while keeping control of the rest of the Windows server functionality. Network teams in turn can divide responsibilities based on the location or expertise within the group and delegate even simpler tasks, like new IP address assignments to the IT Helpdesk.


Different admins have unique role-based control: Role-based control helps ensure secure delegation of management tasks. Various role definitions permit different levels of access restrictions and also help track changes. This way you can maintain security without limiting the ability to delegate required IP management activities. Some examples of role-based control are:

  1. Administrator role or the Super User - full read/write access, initiate scans to all subnets, manage credentials for other roles, create custom fields, and full access to DHCP management and DNS monitoring.
  2. Power Users - varied permissions/access rights restricted to managing subnets and IP addresses only, management of supernet and group properties, and creation of custom data fields on portions of the network made available by the site administrator.
  3. Operator - access to the addition/deletion of IP address ranges and the ability to edit subnet status and IP address properties on the allowed portions of the network.
  4. Read Only Users - have only read access to DHCP servers, scopes, leases, reservations, and DNS servers, zones, records.
  5. Custom access - where the role is defined on a per subnet basis. DHCP and DNS access depends on the Global Account setting.
  6. Hide - Restrict all access to DHCP & DNS management.

Ultimately, control lies with the super user who can assign roles as per the needs and requirements of the network or organization.
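Purely as an illustration (this is not any particular product's API), the role tiers above can be thought of as a simple permission map that a delegation script might consult before allowing an operation. The role and permission names below are invented for the example:

    # Illustrative only: role tiers expressed as a permission lookup table.
    $ipamRoles = @{
        'Administrator' = @('ReadWrite', 'ScanSubnets', 'ManageCredentials', 'CustomFields', 'ManageDHCP', 'MonitorDNS')
        'PowerUser'     = @('ManageSubnets', 'ManageIPs', 'ManageSupernets', 'CustomFields')
        'Operator'      = @('AddRemoveRanges', 'EditSubnetStatus', 'EditIPProperties')
        'ReadOnly'      = @('ViewDHCP', 'ViewDNS', 'ViewIPs')
        'Hide'          = @()                # no access to DHCP/DNS management
    }

    function Test-IpamPermission {
        param([string]$Role, [string]$Action)
        # True only if the role's permission list contains the requested action.
        return ($ipamRoles[$Role] -contains $Action)
    }

    Test-IpamPermission -Role 'Operator' -Action 'ManageCredentials'   # returns False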


Administering and inheriting rights: Setup and assignment of roles need to be easy and quick. The effectiveness of an IPAM system lies in the ease of management of the system itself. Many automated IPAM solutions integrate with Windows Active Directory (commonly used in networks), making it easier to create and assign user roles for IPAM. Built-in role definitions help quickly assign and delegate IPAM tasks to different users.


Change approval or auditing: Compliance standards require that all changes made to the IP address pool be recorded and that change history for IP addresses be maintained. Any change in IP address, DHCP, or DNS management must be logged separately and maintained centrally.

A permissioned access system ensures that only approved/authorized personnel are allowed to make changes to IP address assignments. Ideally, an IP management system should allow administrative access to be delegated by subnet.

 

Maintaining a log for changes helps avoid errors and also simplifies the process of troubleshooting and rollback of unintended administrative changes. Automated IPAM solutions enable auditing by recording every change in the database along with the user name, date, time of modification, and details of the change. The audit details are published as reports and can be exported, emailed or retrieved as required for further analysis. Some examples of these reports are: Unused IP Addresses, Reserved-Static IP Addresses, IP Usage Summary, etc.


Conclusion

In conclusion, it's quite clear that manually managing IP addresses can be a resource drain. On the other hand, investing in a good IPAM solution provides you with effective IPAM options and, more importantly, tangible business benefits, including a compelling return on investment.


Do you agree that role delegation does help ease some load off the network administrator’s back?

Vegaskid

Death by social media

Posted by Vegaskid Jun 15, 2015

In my last article, The human factor, I discussed how you could have the most secure technology that currently exists in place, and that could all amount to nothing if an attacker can persuade one of your users to do their bidding. In this article, I want to focus on a particular topic that fits in nicely: social media. There are apparently as many definitions of social media as there are people who use it, but in the context of this article, I am referring to online services that people use to dynamically share information and media. Examples of such services include Facebook, Twitter, Instagram and YouTube.

 

The world has certainly changed a lot in the last 10 years, since these kinds of services really took off. There has been a massive culture shift from people sharing things via snail mail, to email, to social media. Most businesses have a presence across a number of social media sites as applicable, and the vast majority of workers expect to be able to use them for personal purposes whilst at work. I could go on a rant here about the business risk caused by the lost productivity as social media addicts check in to their accounts every few minutes, but I don't want to be a party pooper. Instead, I will use my shield of security to justify why access to social media, especially from work computers but also on personal equipment on the office network if you have a BYOD policy, presents a risk to businesses that can be difficult to mitigate against.

 

Why? It goes back to the theme of my last post. People. There was a time when we seemed to be winning the battle against the bad guys. Most people (even my Dad!) knew not to be clicking on URLs sent in emails without going through a number of precursory checks. With the previously mentioned culture shift, we have now become so used to clicking on links that our friends and family post on social media that I doubt if the majority of people even stop to think about what they are just about to click on.

 

Consider that people who are active on social media are checking their various feeds throughout the day and you have a recipe for disaster just simmering away, ready to boil over. If you have a loose BYOD policy, or are one of those organisations that gives users local admin accounts (ya know, just to make it easier for them to do their jobs), or your training programme doesn't include social media, then you are opening yourself up to a massive risk.

 

I used to have a colleague many years ago who, having witnessed somebody at work send a trick URL to another colleague which got that person in hot water, told me "you are only ever one click away from being fired". That's a pretty harsh take, but perhaps "you are only ever one click away from data loss" might be a better message to share across your company.

 

As always, I'm really keen to hear your thoughts on the topic of today's post.

Information security is important to every organization, but when it comes to government agencies, security can be considered the priority. A breach or loss of information held by federal agencies can lead to major consequences that can even affect national and economic security.


The Defense Information Systems Agency (DISA) is a combat support agency that provides support to the Department of Defense (DoD), including some of its most critical programs. In turn, this means that DISA must maintain the highest possible security for the networks and systems under its control. To achieve this, DISA developed the Security Technical Implementation Guides (STIGs), a methodology for secure configuration and maintenance of IT systems, including network devices. The DISA STIGs have been used by the DoD for IT security for many years.

 

In 2002, Congress felt civilian agencies weren't making IT security a priority, so to help civilian agencies secure their IT systems, Congress created the Federal Information Security Management Act (FISMA). This act requires that each agency implement information security safeguards, audit them, and make an accounting to the President's Office of Management and Budget (OMB), which in turn prepares an annual compliance report for Congress.

 

FISMA standards and guidelines are developed by the National Institute of Standards and Technology (NIST). Under FISMA, every federal civilian agency is required to adopt a set of processes and policies to aid in securing data and ensure compliance.

 

Challenges and Consequences:

 

Federal agencies face numerous challenges when trying to achieve or maintain FISMA and DISA STIG compliance. For example, routinely examining configurations from hundreds of network devices and ensuring that they are configured in compliance with controls can be daunting, especially for agencies with small IT teams managing large networks. Challenges also arise from user errors, such as employees inadvertently exposing critical configurations, not changing defaults, or having more privileges than required. Non-compliance can have serious consequences: not just sanctions, but also a weakening of or threat to national security, disruption of crucial services used by citizens, and significant economic losses. There are multiple examples of organizations where non-compliance has resulted in critical consequences. For example, a cyber-espionage group named APT1 compromised more than 100 companies across the world and stole valuable data, including business plans, agendas and minutes from meetings involving high-ranking officials, manufacturing procedures, e-mails, user credentials, and network architecture information.

 

Solution:

 

With all that said, NIST FISMA and DISA STIG compliance for your network can be achieved through three simple steps.

 

1. Categorize Information Systems:

An inventory of all devices in the network should be created, and then devices must be assessed to check whether they're in compliance or not. You should also bring non-compliant devices to a compliant baseline configuration and document the policies applied.

 

2. Assess Policy Effectiveness:

Devices should be continuously monitored and tracked to ensure that security policies are followed and enforced at all times. Regular audits using configuration management tools should be performed to detect policy violations. Further, penetration testing can help evaluate the effectiveness of the policies enforced.

 

3. Remediate Risks and Violations:

After all security risks or policy violations are listed, apply a baseline configuration that meets recommended policies or close each open risk after it has been reviewed and approved. Once again, the use of a tool to automate review and approval can speed the process of remediation.
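To make the assess-and-remediate loop concrete, here is a hedged PowerShell sketch. The file names, and the idea of comparing a saved running configuration against a list of required baseline lines, are invented for illustration and are not actual STIG content or any specific tool's method:

    # Illustrative sketch: flag required baseline settings missing from a device's saved running config.
    # File paths and their contents are placeholders, not real STIG rules.
    $requiredSettings = Get-Content '.\baseline-required-lines.txt'   # one required config line per row
    $runningConfig    = Get-Content '.\router01-running-config.txt'   # exported running configuration

    # Any required line not present in the running config is a policy violation.
    $violations = $requiredSettings | Where-Object { $runningConfig -notcontains $_ }

    if ($violations) {
        Write-Output 'Non-compliant settings on router01:'
        $violations | ForEach-Object { Write-Output "  MISSING: $_" }
    } else {
        Write-Output 'router01 matches the required baseline.'
    }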

 

In addition to following these steps, using a tool for continuous monitoring of network devices for configuration changes and change management adds to security and helps achieve compliance.

 

If you are ready to start with your NIST FISMA or DISA STIG implementation and need an even deeper understanding of how to achieve compliance, as well as how to automate these processes with continuous monitoring, download the following SolarWinds white paper: “Compliance & Continuous Cybersecurity Monitoring.”

 

But for those of you who would like to test a tool before deploying it into the production network, SolarWinds Network Configuration Manager is a configuration and change management tool that can integrate with GNS3, a network simulator. For integration details, refer to this guide:

https://community.gns3.com/docs/DOC-1903

 

Happy monitoring!

A few months ago, SolarWinds asked me to evaluate their IPAM product and to see how it compares to Microsoft’s IPAM solution that is built into Windows Server 2012 and Windows Server 2012 R2. In doing so, I constructed a multi-forest environment and worked through a feature by feature comparison of the two tools.

 

Obviously any third party product should provide functionality beyond that of the built in management tools. Otherwise what’s the point of using a third party management tool? That being the case, I’m not going to bother talking about all of the great functionality that exists within SolarWinds IPAM. I’m sure that the SolarWinds marketing staff could do a better job of that than I ever could.

 

What I do want to talk about is a simple question that someone asked me while I was attending Microsoft Ignite. That person said that they had heard that Microsoft IPAM really didn’t work very well and wondered if they were better off continuing to use spreadsheets for IP address management.

 

Here is my honest opinion on the matter:

 

Microsoft IPAM works, but it has a lot of limitations. For instance, a single Microsoft IPAM instance can’t manage multiple Active Directory forests and Microsoft IPAM does not work with non-Microsoft products. I think that using a third party product such as SolarWinds IPAM is clearly the best option, but if a third party management tool isn’t in the budget and you can live with Microsoft IPAM’s limitations then yes, it will work.

 

Having said that, there are two more things that you need to know if you are considering using Microsoft IPAM. First, even though it is relatively easy to set up Microsoft IPAM, it can be really tricky to bring your DNS and DHCP servers under management. The process is anything but intuitive and often requires some troubleshooting along the way. In fact, I recently wrote a rather lengthy article for Redmond Magazine on how to troubleshoot this process (this article will be published soon).

 

The second thing that you need to know is that there is a bit of a learning curve associated with using the Microsoft IPAM console. There are times when you may need to refresh the console without being told to do so. Similarly, there are some tasks that must be performed through the DNS or DHCP management consoles. It takes some time to learn all of the console’s various nuances and you may find that a third party tool makes the management process easier and more efficient.

 

Many of us use Windows Server in a variety of roles, including DHCP and DNS. Microsoft DHCP and DNS services are provided at no extra cost and do a good job. Besides, Active Directory depends on the use of DNS, so many organizations choose to use the Microsoft DNS services if for no other reason than to support Active Directory.

 

Many of us (especially those still using spreadsheets to track IPs) were excited to see a new IP Address Management utility bundled with Windows Server 2012. However, Microsoft IPAM is somewhat comparable to many of Windows Server's other built-in management tools. That is to say, these tools are functional, but they are not necessarily elegant and often suffer from a number of limitations. In fact, the Microsoft IPAM tool has many of the same limitations as the built-in DNS and DHCP management tools.

 

For example, Microsoft IPAM works great within a single AD forest, but it won’t work across forest boundaries. Organizations that have multiple Active Directory forests can deploy multiple IPAM instances, but these instances are not aware of one another and do not provide any sort of data synchronization.

 

The Microsoft IPAM console does a decent job of managing Microsoft DHCP servers, but is not designed to replace the DHCP MMC. Unfortunately, the Microsoft IPAM console provides very little DNS functionality, although this is expected to improve over time.

 

So why does any of this matter? Because your time is worth something. If the tool is free but doesn't save you time, then the solution is not cost-effective. This is why some organizations eventually graduate from Microsoft's bundled tools to third-party solutions.

 

What is your point of view?  Do you use Microsoft IPAM?  Why or why not?  Have you graduated?  If so, to what and why? If you would like to learn more about how Microsoft IPAM may cost you more, view this recorded webinar.

 

As SolarWinds’ Head Geek, and a somewhat self-respecting IT professional, I try to retain at least some decorum when I post to Geek Speak. But this year I’m bursting at the seams with anticipation for what we have in store for you in this year’s thwackCamp! (Hyperbolic much?)

 

You, the thwack community, have really embraced thwackCamp, most recently our thwackCamp Holiday thwackTacular 2014, and attendance has grown substantially again. As a result, this year we're offering two tracks, a real Keynote featuring Nikki and Joel, and a DevOps Experts Panel with Caroline McCrory, Dave McCrory, Michael Coté and Matthew Ray.

 

Between those we’ll have several deep-dive, best practice and how-to sessions on a number of topics regarding both SolarWinds products and IT management in general.  Again this year all topics are coming from you, the amazing IT professional members of thwack.  We think you’ll really like them.

 

Of course all the thwackCamp favorites are back again this year, including lots of giveaways, multiple live chat lobbies with our presenters, and the Head Geeks. I'll even be doing live member shout-outs and taking some tricky questions on-air.

 

Let’s do this

 

First, make sure you have a thwack account!  The Keynote and Experts Panel will be open to all, but the track sessions will only be open to members of the thwack community.

 

Next, head over to the thwackCamp event page and check out the schedule. It's at http://thwackcamp.com. See? It has its own URL now; we're not messing around this year. (OK, truthfully you can browse the event catalog even without a login, so technically you can defer step one until you're ready.) You may also register for individual sessions by clicking on them to view their details. That will make it easy to manage your schedule in the thwack calendar.

 

Last, be social.  We’ll be keeping an eye out for tweets with the #thwackCamp hashtag, and we’ll retweet great ones.  Or in my case really geeky ones.  Seriously, I just tweeted a pic from Cisco Live of the 7004 core routers in the Cisco NOC with the caption “A little Nexus never hurt anybody”. Get it? It’s the smallest N7000 chassis? Ugh.  I’m pretty sure you can do better.

 

 

So get ready, get registered, and then have fun at thwackCamp again this year!


I have a confession: I like documentation.

 

I know.

 

But I have good reasons. Well; some reasons are better than others. I like to type (not really a good reason); I like to write (although I'd argue I'm not all that great at it); and I like to have good references for projects I've done that will hopefully be useful to any future person--or myself--who has to support what I've put together (OK, this is actually a good one).

 

When talking about data projects or DBA-related work, reference documentation can contain a lot of nitty-gritty information. There are two specific areas that I think are most important, however. The first is useful in the case of Business Intelligence projects, and usually takes the form of a combination Data Dictionary/Source-to-Target Mapping listing. The other is in the vein of my post from last week, wherein I discussed asking the question of "why" during the early stages of a project. Even having only these "why" questions and answers written down can and will go a long way towards painting the picture of a project for the future person needing to support our masterpieces.

 

As good as it truly is for all people involved in a project, writing reference documentation is something that isn't necessarily easy or a lot of fun to do. One of the most frustrating things about it is that in a lot of instances, time to write this documentation isn't included in the cost or schedule of the project; especially, in my experience, if it's following something like the Scrum methodology. In the case of a "day job", spending time on documentation may not have the same immediate need, as the people who participated on the project theoretically aren't going anywhere. Furthermore, people spending that time takes time away from other work they could be doing. In the case of an outside consultancy, paying an outside firm to sit around and write documentation has an obvious effect on the project's bottom line.

 

If I had to guess, I'd say most of you readers out there aren't as into documentation as much as I am…I really do understand. But at the same time, what do you do for the future? Do you rely on comments in code or something like Extended Properties on database objects?

And, for those of you that do attempt to provide documentation… do you have to make special efforts to allow the time to create it? Any secrets to winning that argument? ;-)
