What Is Your Why?

Posted by scuff Jun 29, 2015

Browsing the discussions and resources here in the Thwack community, I can see that you are an incredible bunch of people - passionate and knowledgeable about your own areas of expertise & eager to guide & advise others.


But, for a moment, let’s look up from what we are doing. Let’s talk about your Why.

I'm sure you all know what you do. That would be easy to tell someone, right? And you could probably also go into great detail about how you do it. But do you have a real vision for why you do what you do? Do you have a vision that extends beyond "because it's my job" or "because the servers would fall over if I didn't"?


This isn't a deeply technical topic, but it can become your secret weapon. Your Why can lift you during a frustrating day. It can get you through a 1am call out (as well as all the caffeine). And it will be music to the ears of management or the business when you come to them with a problem or a recommendation. If you can frame your conversations with them to address their Why, I can guarantee you better success.


“I'm a sys admin. I monitor and maintain our servers. I do this to keep the business running. The end users can then serve their customers.” Easy so far, right?

So what does your company do for its customers, who are ultimately your customers? Is the product or service vision of your company as deeply ingrained in your mind as it is in the marketing department?

Does your company give peace of mind to people and help them in the toughest times … i.e. are you an insurance company?


So really, you give people peace of mind & help them in the toughest times by ensuring that your staff have fast access to their computer systems when they need them.


So tell me: what is your Why? How do you think your Why can influence how you work & what decisions you make?


Simon Sinek explains Apple's Why in his famous "Start With Why" TED talk.


P.S. If you are in this for the thrill of working with technology, I get that too.

As a child of the '80s, there are particular songs (like "Danger Zone", "Shout", "No Sleep Till Brooklyn") that really bring me back to the essence of that time period (RIP, mullet). There are even times when current events take me back to a specific song. Take today's storage market and all the technologies being discussed: looking at it sends me back to the classic Public Enemy song, "Don't Believe the Hype…" There is so much hype around different technologies that it can be overwhelming to keep up, let alone make a decision on what is right for your company. You also have to manage the pressures of business needs, storage performance needs, data protection, data growth, and resource constraints, to name just a few. I might come off as pro old-school IT, but I'm not. Ask yourself some of the questions below, and make sure the promise of these new trends makes sense for your business before you jump on the bandwagon.




Hyper-Convergence


Hyper-convergence is a software infrastructure solution that combines compute, storage, networking, and virtualization resources in commodity boxes. The promise is integrated technologies that provide a single view for the administrator. This makes it easier for users to deploy, manage, grow, and support their environment because everything is tied together. This is great for quite a few environments, but is it great for "your" environment? What do your VM workloads look like? Do they all have similar resource requirements, or are some VMs more demanding than others? Do your resource needs (CPU, memory, storage, etc.) grow evenly, or are some resources growing faster than others?


If you’re considering a hyper-converged solution, check out this whitepaper: The Hyper-Convergence Effect: Do Management Requirements Change?


Solid State Drives


Solid state drives have been around for decades, but over the last few years they have really taken off, thanks to new technology advances (PCIe/NVMe) and a dramatic drop in the cost of flash. The promise of SSD is higher performance, better durability, cooler operation, and denser form factors. This has led to claims that hard drives are dead and SSD (flash) is all you need in your data center. Is this right for "your" environment? Do you need high performance across your entire environment? What is your capacity growth, and how does it compare to your performance growth? Will your applications take advantage of SSDs? Do you have the budget for flash storage across all applications?


If you are considering making a move to solid state drives, check out this SolarWinds whitepaper: How do I know my SQL and Virtual Environments are ready for SSD?


Cloud Storage


For years, people have been talking about "the cloud," whether private or public. Here, we'll talk about public clouds. Over the last couple of years, we have seen more businesses adopt cloud storage into their data storage environments. The promise is allowing companies to access their data anywhere, freeing up IT resources, providing scalability to grow your business, and reducing IT costs, to name a few. This has led to claims that everything is going to the cloud and that keeping storage on premises is no longer ideal. For many companies, that might be true, but is it ideal for "your" environment? What happens if there is an outage, whether at the cloud provider or in your connection to the cloud? Do you have the bandwidth to support your users' access from an external location? Which cloud provider are you using, and are you locked in to that provider? How will you manage your data security and protect against attacks on that data?


These are just a few of the storage technologies currently being hyped in the market, and each of them has a place in almost all data centers. However, just because a new technology solves certain data center problems does not mean it will solve "your" problems. Understanding your problems, and where you want to take your business, is the best way to move past the hype of a new technology and really see the value it will provide.


Now, what do you think? Is there too much hype in the storage market? Which storage technology do you think is being over-hyped?

A couple of weeks ago, I wrote about how I love documentation and how useful it can be in the future, for you or someone else. I also lamented slightly about how it is hard on most projects to get the time to write good reference documentation. This week, I'm going to discuss one possible way to knock out some of the more time-consuming parts.


The key word is: design. That is, the "design" phase of the project. In most projects, regardless of the flavor of SDLC utilized, there is a block of time set aside for designing the solution. Since this is work that will be happening anyway, getting some good documentation out of it is simply a matter of writing down what gets discussed, the decisions that get made (along with the "why"s, of course), and how the solution(s) will be structured. Chances are these things get written down anyway, just outside the mindset of their possible use as future reference material. By its nature, designing a project will usually produce some useful artifacts: things like high-level architecture diagrams, or possibly an ERD or two. If it's a data-integration or BI project, and one is into details, source-to-target mappings are likely generated at this point.
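Even something lightweight counts here. A source-to-target mapping doesn't need a fancy tool; captured as plain data during design, it becomes reference material for free. A minimal sketch in Python (all table and column names below are hypothetical examples, not from any real project):

```python
# A minimal source-to-target mapping captured as data during design.
# Every table/column name here is a hypothetical example.
mappings = [
    {"source": "crm.customers.cust_nm",  "target": "dw.dim_customer.customer_name",
     "rule": "trim and title-case"},
    {"source": "crm.customers.cust_dob", "target": "dw.dim_customer.birth_date",
     "rule": "parse MM/DD/YYYY; NULL if invalid"},
    {"source": "crm.orders.amt",         "target": "dw.fact_sales.order_amount",
     "rule": "cast to DECIMAL(12,2)"},
]

# Rendered as a quick reference table for the ETL developer:
for m in mappings:
    print(f"{m['source']:28} -> {m['target']:32} [{m['rule']}]")
```

Kept under version control alongside the project, a file like this doubles as both a design artifact and future reference documentation.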


All of these items add up to a decent set of notes for the future, explaining the solution and where it came from. This way, even if no dedicated time can be worked into a project for documentation, useful items can be produced.


I think there's another benefit to this course of action. I have a phrase I use to describe my philosophy on this topic: I don't like to think while I'm writing ETL. This sounds kind of bad on the surface, but what I really mean is this: when it comes time to actually sling some code, I want to be able to concentrate on the mechanical details, things like doing everything right, following best practices, addressing security concerns, and making things perform as well as they can, all right from the get-go. With the data flows already drawn out and the business rules documented and agreed upon, it is easy to focus on those things.


We had a fairly nice conversation about how hard it is to get time to write documentation when I wrote about it before. Would beefing up the efforts during the design phase help? Are y'all even able to get good time for the design stage in the first place? (I've been in predicaments where that gets axed, too.)

In my fourth post of my tenure as Thwack ambassador for June, I thought I would talk about what appears to be the never-ending battle between good and bad. If I can get to the end without mentioning 'cyber' more than that single reference, or various different coloured hats, then my work here will be done. The purpose of this post is to hopefully spark some discussion around the topic, and I would love to hear your take.


Attacks on computer systems are nothing new. The web is full of stories of viruses that seem to go back further and further in time the more you look. The first I am aware of is the Creeper virus, which was released on ARPANET way back in 1971, before even this oldie was born. Over forty years later, the anti-virus vendors have still failed to deliver adequate protection against viruses, trojans, worms and other similar bad things that I will bundle together under the label of malware. The problem doesn't stop at deliberately malicious code, either: software bugs, interoperability issues between different systems, 'helpful' developer back doors. It seems that no sooner has one attack vector been patched up than another 100 fill its place. Technology has for the longest time been used by both sides to get a leg up on the other.


The fact that technology and our pace of life are advancing at an ever-increasing rate means that this cycle is getting ever more frequent. Personally, I feel this is one of the key reasons why it will never end. That sounds a bit depressing, but I am a realist at heart (often mistaken for a sceptic by the rose-tinted-spectacle-wearing brigade), so I strongly believe we give ourselves a better chance of being protected if we follow a number of best practices (some of which I highlighted in my first post, Defence in depth), keep up to date with relevant industry news and events, maintain a good internal culture, with all staff bought in, good documentation and processes, and buy-in from the top down, and work together as a mature community. It's not unreasonable to state that the majority of drive-by attackers will give up and move on if you present a big enough obstacle to penetrate. If you don't offer any real defences, though, thinking all is lost, you will almost certainly experience that as a self-fulfilling prophecy.


Let me know what your thoughts are on my scribbles above, and what you think the battlefield will look like in 20 years' time.

"Shadow IT" refers to the IT systems, solutions, and services used by employees in an organization without the approval, knowledge, or support of the IT department. It is also referred to as "Stealth IT." In its widely known usage, shadow IT is a negative term, and it is mostly condemned by IT teams because these solutions are NOT in line with the organization's requirements for control, documentation, security, and compliance. Because it increases the likelihood of unofficial and uncontrolled data flows, shadow IT makes it more difficult to comply with SOX, PCI DSS, FISMA, HIPAA, and many other regulatory compliance standards.



The growth of shadow IT in recent years can be attributed to the increasing consumerization of technology, cloud computing services, and freeware services online that are easy to acquire and deploy without going through the corporate IT department.

  • Usage of Dropbox and other hosted services for storing and exchanging corporate information can be shadow IT.
  • Installation and usage of non-IT-approved software on company-provided devices is also shadow IT. Whether it is installing a photo editing tool, music player, or a pastime game, if your IT regulations are against them, they can also be shadow IT.
  • BYOD, not in accordance with the IT policy, can contribute to shadow IT as IT teams have no way of finding out and protecting corporate data stored on personal devices.
  • Even usage of USB drives or CDs to copy corporate data from corporate devices can be considered shadow IT, if the company’s IT policy has mandated against it.



The foremost challenge is upholding security and data integrity. Shadow IT risks exposing sensitive data to sources outside the network firewall, and also risks letting malicious programs and malware into the network, causing security breaches. Some companies take this very seriously and stipulate strict IT regulations that require IT administrator access to install new software on employee workstations. Some websites can also be blocked on the corporate network if there is a chance of employees exposing data through them. These could be social media, hosted online services, personal email, etc.


There have been various instances of compliance violations and financial penalties for companies that have had their customer information hacked due to the presence of intrusive malware in an employee’s system, leading to massive data breaches. Should we even start talking about the data breaches on the cloud? It'll be an endless story.


Additionally, shadow IT sets the stage for asset management and software licensing issues. The onus falls on the IT department to constantly scan for non-IT-approved software and services being used by employees, and to remove them according to policy.
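At its simplest, that scan boils down to comparing what is installed against what is approved. A minimal sketch in Python (the software names and both lists are invented for illustration):

```python
# Hypothetical inventory data: software found on a workstation vs. the approved list.
approved = {"office-suite", "pdf-reader", "vpn-client", "endpoint-agent"}
installed = {"office-suite", "pdf-reader", "music-player", "file-sync-tool"}

unapproved = sorted(installed - approved)   # shadow IT candidates to flag
missing    = sorted(approved - installed)   # required tools that are absent

print("Flag for review:", unapproved)
print("Not installed:  ", missing)
```

Real-world inventory tools do this at scale across every endpoint, but the underlying comparison is exactly this set difference.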



This is a debatable question, because there are instances where shadow IT can be useful to employees. If IT policies and new software procurement procedures are too bureaucratic and time-consuming, and employees can get the job done quickly by resorting to free tools available online, then, from a business perspective, why not? There are also arguments that, when implemented properly, shadow IT can spur innovation. Organizations can find faster and more productive ways of working with newer and cheaper technologies.


What is your take on shadow IT? No doubt it comes with more bane than boon. How does your organization deal with it?

Fresh out of high school, I got a job working in a large bank. My favorite task was inputting sales figures into a DOS-based system and watching it crash when I tried to print reports. I extended my high school computing knowledge by working over the phone with second-level support. I confessed that I could understand what they were doing, but I wouldn't know where to start.


They saw some raw potential in me and invited me onto an IT project to roll out new computers to the bank branches nationwide. I was to learn everything on the job: DOS commands, device drivers, Windows NT architecture, TCP/IP addressing, token ring networks, SMTP communications and LDAP architecture. My first IT task was cloning computers off an image on a portable backpack, when the registry didn’t exist & Windows was just a file copy away.


Very little was done with a GUI. The only wizard was the Windows Installer. When you learn about IT from the ground up and you understand the concepts, you can then troubleshoot. When things didn't work (and you couldn't just Google the error), I could apply my knowledge of how it should be working, know where to start with some resolution steps, and 'trial and error' my way out of it. Now, that knowledge means I can cut through the noise of thousands of search results and apply 'related but not quite the same' scenarios until I find a resolution.


More than one person has commented to me that this generation of teenagers doesn't have the same tech-savvy skills as the previous one, because they are used to consumer IT devices that generally just work. So, in a world where technology is more prevalent than ever, do we get back to basics enough? Do we teach the mechanics of it all to those new to the industry or to someone looking for a solution? Or do we just point them to the fix?

Data people and developers don't really get along very well. There have been lots of reasons for this historically, and, I'm guessing, as time goes on, there will continue to be conflict between these two groups of people. Sometimes these disagreements are petty; others are more fundamental. One of the areas I've seen cause the most strife is shared project work using an Agile software development lifecycle. I know talking about Agile methodologies and data-related projects/items in the same sentence is a recipe for a serious religious battle, but here I want to keep the conversation to a specific couple of items.


The first of these two items is what happens when an application is developed using an ORM and a language that allow the dev team to not focus on the database or its design. Instead, the engineer(s) only need to write code and allow the ORM to design and build the database schema underneath. (Although this has been around for longer than Agile processes have been, I've seen a lot more of it on Agile projects.) This can lead to disconnects for a Development DBA-type person tasked with ensuring good database performance for the new application or for a Business Intelligence developer extracting data to supply to a Data Mart and/or Warehouse.
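To make the disconnect concrete, here is a toy Python sketch of the pattern: the developer declares a model class, and a stand-in "ORM" generates the table schema from it. (This is a deliberately simplified illustration, not how SQLAlchemy, Hibernate, Entity Framework, or any real ORM is implemented.)

```python
import sqlite3

# Toy illustration of the ORM pattern: the application declares a class,
# and the "ORM" derives the database schema from it.
class Customer:
    fields = {"id": "INTEGER PRIMARY KEY", "name": "TEXT", "signup_date": "TEXT"}

def create_table(conn, model):
    """Generate and execute DDL from the model definition."""
    cols = ", ".join(f"{name} {sqltype}" for name, sqltype in model.fields.items())
    ddl = f"CREATE TABLE {model.__name__.lower()} ({cols})"
    conn.execute(ddl)
    return ddl

conn = sqlite3.connect(":memory:")
ddl = create_table(conn, Customer)
print(ddl)
# Note that nobody on the data side ever reviewed this DDL: it was
# generated from application code, not designed up front.
```

The point of the sketch is the last comment: the schema exists only as a side effect of application code, which is exactly what leaves the DBA and BI developer out of the loop.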


Almost by its nature, this use of an ORM means the data model might not be done until a lot of the application itself is developed, and this might be pretty late in the game. In ETL Land, development can't really start until actual columns and tables exist. Furthermore, if anything changes later on, it can take a lot of effort to make the corresponding changes in ETL processes. For a DBA interested in performance-tuning new data objects and elements, there may not be much to do here: the model is defined in code, and there isn't an abstraction layer that can "protect" the application from changes the DBA may want to make to improve performance.


The other problem affects Business Intelligence projects a little more specifically. In my experience, it's easy for answers to "why" questions that have already been asked to get lost in the documentation of User Stories and their associated Acceptance Criteria. Addressing "why" data elements are defined the way they are is super-important to designing useful BI solutions. Of course, the BI developer is going to want/need to talk to the SMEs directly, but there isn't always time for this allotted during an Agile project's schedule.


I've found the best way to handle all this is to focus on an old problem in IT and one of the fundamental tenets of the Agile method: communication. I'll follow that up with a close second place: teamwork. Of course, these things should be going on from Day 1 of any project, but they are especially important if either of the items discussed above is causing major problems on a project. As data people, we should work with the development team (and the Business Analysts, if applicable) from the get-go, participating in early business-y discussions so we can get all of the backstory. We can help the dev team with data design to an extent, too. From a pure DBA perspective, there's still an opportunity to work on indexing strategies in this scenario, but it takes good communication.


Nosing into this process will take some convincing if a shop's process is already pretty stable. It may even involve "volunteering" some time for the first couple projects, but I'm pretty confident that everyone will quickly see the benefits, both in quality of project outcome and the amount of time anyone is "waiting" on the data team.


I've had mixed feelings (and results) working this type of project, but with good, open communication, things can go alright. For readers who have been on these projects, how have they gone? Are data folks included directly as part of the Agile team? Has that helped make things run more smoothly?

IT pros now have the added responsibility of knowing how to troubleshoot performance issues in apps and servers that are hosted remotely, in addition to monitoring and managing servers and apps that are hosted locally. This is where tools like Windows Remote Management (WinRM) come in handy, because they allow you to remotely manage, monitor, and troubleshoot applications and Windows server performance.


WinRM is based on Web Services-Management (WS-Management), which uses Simple Object Access Protocol (SOAP) requests to communicate with remote and local hosts, multi-vendor server hardware, operating systems, and applications. If you are predominantly running a Windows environment, WinRM provides remote management capabilities that let you do the following:

  • Communicate with remote hosts over a single, predictable port (by default, 5985 for HTTP and 5986 for HTTPS) that is commonly allowed through firewalls and on client machines on a network.
  • Quickly start working in a cloud environment and remotely configure WinRM on EC2, Azure, etc. and monitor the performance of apps in such environments.
  • Ensure smoother execution and configuration for monitoring and managing apps and servers hosted remotely.


Configuring WinRM

For those who rely on PowerShell scripts to monitor applications running on remote hosts, you will first need to configure WinRM. But this isn't as easy as it sounds. The process is error-prone, tedious, and time-consuming, especially when you have a really large environment. To get started, you will need to configure WinRM and the Windows firewall on each server you want to manage. Here is a link to a blog that explains, step by step, how to configure WinRM on every computer or server. Key steps include:

  • Enabling the WinRM service and creating a listener (for example, by running winrm quickconfig or Enable-PSRemoting).
  • Opening firewall exceptions for the WinRM ports.
  • Setting up certificates if you want encrypted (HTTPS) communication.
  • Adding remote machines to the TrustedHosts list where domain authentication isn't available.



Alternative: Automate WinRM Configuration

Unfortunately, manual methods can take up too much of your time, especially if you have multiple apps and servers. With automated WinRM configuration, remote execution of PowerShell scripts can be up and running in minutes. The SolarWinds free tool Remote Execution Enabler for PowerShell helps you configure WinRM on all your servers in a few minutes:

  • Configure WinRM on local and remote servers.
  • Bulk configuration across multiple hosts.
  • Automatically generate and distribute certificates for encrypted remote PowerShell execution.


Download the free tool here.


How do you manage your servers and apps that are hosted remotely? Is it using an automated platform, PowerShell scripts, or manual processes? Whatever the case, drop a line in the comments section.

In my previous blog, I discussed the difficulties of manual IP address management (IPAM). Manual management can result in poor visibility, inefficient operations, compromised security, and the inability to meet audit requirements for compliance. Many of the comments on that blog leaned towards shifting to an automated solution. Here are four basic best practices for role delegation, an essential criterion for efficient IPAM.


Effective distribution of responsibility across, and within, teams: Access control is an important consideration when multiple users have access to the IPAM system.

As IP administration operations touch several teams, it is recommended to:

  • Distribute tasks based on responsibilities and expertise of teams and individuals.
  • Securely delegate IP management tasks to different administrators without affecting current management practices.
  • Avoid bottlenecks, inefficiencies and errors while configuring different systems, accessing an available IP, or while making DHCP/DNS changes.

For example, the server team can delegate management of DNS/DHCP and IPAM functions to the network team while keeping control of the rest of the Windows server functionality. Network teams in turn can divide responsibilities based on the location or expertise within the group and delegate even simpler tasks, like new IP address assignments to the IT Helpdesk.

Different admins have unique role-based control: Role-based control helps ensure secure delegation of management tasks. Various role definitions permit different levels of access restrictions and also help track changes. This way you can maintain security without limiting the ability to delegate required IP management activities. Some examples of role-based control are:

  1. Administrator role or the Super User - full read/write access, initiate scans to all subnets, manage credentials for other roles, create custom fields, and full access to DHCP management and DNS monitoring.
  2. Power Users - varied permissions/access rights restricted to managing subnets and IP addresses only, management of supernet and group properties, and creation of custom data fields on portions of the network made available by the site administrator.
  3. Operator - access to the addition/deletion of IP address ranges and the ability to edit subnet status and IP address properties on the allowed portions of the network.
  4. Read Only Users - have only read access to DHCP servers, scopes, leases, reservations, and DNS servers, zones, records.
  5. Custom access - where the role is defined on a per subnet basis. DHCP and DNS access depends on the Global Account setting.
  6. Hide - Restrict all access to DHCP & DNS management.

Ultimately, control lies with the super user who can assign roles as per the needs and requirements of the network or organization.
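The roles above boil down to a two-part check: is the user's role allowed to perform the action, and is the target address inside a subnet delegated to them? A sketch in Python using the standard ipaddress module (the roles, users, rights, and subnets are all hypothetical examples, not from any particular IPAM product):

```python
import ipaddress

# Hypothetical rights per role, loosely mirroring the roles described above.
ROLE_RIGHTS = {
    "admin":     {"read", "write", "scan", "manage_credentials"},
    "power":     {"read", "write", "scan"},
    "operator":  {"read", "write"},
    "read_only": {"read"},
    "hide":      set(),
}

# Hypothetical users: each gets a role scoped to specific delegated subnets.
USER_SCOPES = {
    "alice": ("admin",     ["10.0.0.0/8"]),                    # super user
    "bob":   ("operator",  ["10.1.0.0/16"]),                   # one site only
    "carol": ("read_only", ["10.1.0.0/16", "10.2.0.0/16"]),
}

def can(user, right, ip):
    """Allow an action only if the role grants the right AND the address
    falls inside a subnet delegated to that user."""
    role, subnets = USER_SCOPES[user]
    in_scope = any(ipaddress.ip_address(ip) in ipaddress.ip_network(s)
                   for s in subnets)
    return in_scope and right in ROLE_RIGHTS[role]

print(can("bob", "write", "10.1.5.20"))    # operator, inside delegated subnet
print(can("bob", "write", "10.2.5.20"))    # outside delegated scope
print(can("carol", "write", "10.1.5.20"))  # read-only role lacks write
```

Real IPAM products layer this onto directory groups and per-subnet ACLs, but the delegation logic is essentially this intersection of role rights and subnet scope.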

Administering and inheriting rights: Setup and assignment of roles need to be easy and not too time-consuming. The effectiveness of an IPAM system lies in the ease of managing the system itself. Many automated IPAM solutions integrate with Windows Active Directory (commonly used in networks), making it easier to create and assign user roles for IPAM. Built-in role definitions help you quickly assign and delegate IPAM tasks to different users.

Change approval or auditing: Compliance standards require that all changes made to the IP address pool be recorded and change history for IP addresses be maintained. Any change in the IP management structure of IP address, DHCP & DNS management must be logged separately, and maintained centrally.

A permissioned access system ensures that only approved/authorized personnel are allowed to make changes to IP address assignments. Ideally, an IP management system should allow administrative access to be delegated by subnet.


Maintaining a log for changes helps avoid errors and also simplifies the process of troubleshooting and rollback of unintended administrative changes. Automated IPAM solutions enable auditing by recording every change in the database along with the user name, date, time of modification, and details of the change. The audit details are published as reports and can be exported, emailed or retrieved as required for further analysis. Some examples of these reports are: Unused IP Addresses, Reserved-Static IP Addresses, IP Usage Summary, etc.
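As a rough illustration, an audit record of this kind only needs a handful of fields to be useful. A minimal Python sketch (the users and changes are invented; a real IPAM solution would write these to its database rather than a list):

```python
import json
from datetime import datetime, timezone

# A hypothetical change record of the kind an IPAM audit trail would store.
def record_change(log, user, action, target, details):
    log.append({
        "user": user,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "target": target,
        "details": details,
    })

audit_log = []
record_change(audit_log, "bob", "reserve_ip", "10.1.5.20",
              "Reserved for new print server PRN-07")
record_change(audit_log, "alice", "edit_scope", "10.2.0.0/16",
              "Extended DHCP scope by 128 addresses")

# Exportable for compliance reports:
print(json.dumps(audit_log, indent=2))
```

With user, timestamp, and details captured on every change, rollback and troubleshooting become a matter of reading the log rather than reconstructing history from memory.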


In conclusion, it's quite clear that manually managing IP addresses can be a resource drain. On the other hand, investing in a good IPAM solution provides you with effective IPAM options and, more importantly, tangible business benefits, including a compelling return on investment.

Do you agree that role delegation helps take some load off the network administrator's back?


Death by social media

Posted by Vegaskid Jun 15, 2015

In my last article, The human factor, I discussed how you could have the most secure technology currently in existence in place, and it could all amount to nothing if an attacker can persuade one of your users to do their bidding. In this article, I want to focus on a particular topic that fits in nicely: social media. There are apparently as many definitions of social media as there are people who use it, but in the context of this article, I am referring to online services that people use to dynamically share information and media. Examples of such services include Facebook, Twitter, Instagram and YouTube.


The world has certainly changed a lot in the last 10 years, when these kinds of services really took off. There has been a massive culture shift from people sharing things via snail mail, to email, to social media. Most businesses have a presence across a number of social media sites as applicable, and the vast majority of workers expect to be able to use them for personal use whilst at work. I could go on a rant here about the business risk of lost productivity as social media addicts check in to their accounts every few minutes, but I don't want to be a party pooper. Instead, I will use my shield of security to justify why access to social media, especially from work computers, but also on personal equipment on the office network if you have a BYOD policy, presents a risk to businesses that can be difficult to mitigate.


Why? It goes back to the theme of my last post: people. There was a time when we seemed to be winning the battle against the bad guys. Most people (even my Dad!) knew not to click on URLs sent in emails without going through a number of preliminary checks. With the previously mentioned culture shift, we have now become so used to clicking on links that our friends and family post on social media that I doubt the majority of people even stop to think about what they are about to click on.


Consider that people who are active on social media check their various feeds throughout the day, and you have a recipe for disaster just simmering away, ready to boil over. If you have a loose BYOD policy, or are one of those organisations that give users local admin accounts (ya know, just to make it easier for them to do their jobs), or your training programme doesn't include social media, then you are opening yourself up to a massive risk.


I used to have a colleague many years ago who, having witnessed somebody at work send a trick URL to another colleague which got that person in hot water, told me "you are only ever one click away from being fired". That's a pretty harsh take, but perhaps "you are only ever one click away from data loss" might be a better message to share across your company.


As always, I'm really keen to hear your thoughts on the topic of today's post.

Information security is important to every organization, but for government agencies, security can be considered the top priority. A breach or loss of information held by federal agencies can have major consequences, even affecting the national and economic security of the nation.

The Defense Information Systems Agency (DISA) is a combat support agency that provides support to the Department of Defense (DoD), including some of its most critical programs. In turn, this means that DISA must maintain the highest possible security for the networks and systems under its control. To achieve this, DISA developed the Security Technical Implementation Guides (STIGs), a methodology for the secure configuration and maintenance of IT systems, including network devices. The DISA STIGs have been used by the DoD for IT security for many years.


In 2002, Congress felt civilian agencies weren't making IT security a priority, so to help them secure their IT systems, Congress created the Federal Information Security Management Act (FISMA). This act requires each agency to implement information security safeguards, audit them, and report to the President's Office of Management and Budget (OMB), which in turn prepares an annual compliance report for Congress.


FISMA standards and guidelines are developed by the National Institute of Standards and Technology (NIST). Under FISMA, every federal civilian agency is required to adopt a set of processes and policies to aid in securing data and ensure compliance.


Challenges and Consequences:


Federal agencies face numerous challenges in achieving or maintaining FISMA and DISA STIG compliance. For example, routinely examining configurations from hundreds of network devices and ensuring they comply with the required controls can be daunting, especially for agencies with small IT teams managing large networks. Challenges also arise from user error, such as employees inadvertently exposing critical configurations, leaving defaults unchanged, or holding more privileges than they need. Non-compliance can have severe consequences: not just sanctions, but weakened national security, disruption of crucial services used by citizens, and significant economic losses. There are multiple examples of critical consequences following from security lapses. For instance, the cyber-espionage group APT1 compromised more than 100 companies across the world and stole valuable organizational data, including business plans, agendas and minutes from meetings involving high-ranking officials, manufacturing procedures, e-mails, user credentials, and network architecture information.




With all that said, NIST FISMA and DISA STIGs compliance for your network can be achieved through three simple steps.


1. Categorize Information Systems:

Create an inventory of all devices in the network, then assess each device to check whether it is in compliance. Bring non-compliant devices to a compliant baseline configuration and document the policies applied.


2. Assess Policy Effectiveness:

Continuously monitor and track devices to ensure that security policies are followed and enforced at all times. Use configuration management tools to run regular audits for policy violations, and use penetration testing to evaluate the effectiveness of the policies enforced.


3. Remediate Risks and Violations:

After all security risks and policy violations are listed, apply a baseline configuration that meets the recommended policies, or close each open risk after it has been reviewed and approved. Once again, a tool that automates review and approval can speed up remediation.
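The assessment pass in steps 1 and 2 boils down to comparing each device's running configuration against a required baseline. Here is a minimal sketch in Python; the device names, config text, and STIG-style required lines are all invented for illustration, and real tooling (such as an NCM product) works at far greater depth:

```python
# Toy compliance audit: flag devices whose configurations are missing
# required baseline lines. All device data below is hypothetical.

REQUIRED_LINES = [
    "service password-encryption",
    "no ip http server",
    "logging host 10.0.0.5",
]

DEVICE_CONFIGS = {
    "edge-router-1": (
        "service password-encryption\n"
        "no ip http server\n"
        "logging host 10.0.0.5"
    ),
    "core-switch-1": "no ip http server",
}

def audit(configs, required):
    """Return {device: [missing baseline lines]} for non-compliant devices."""
    violations = {}
    for device, config in configs.items():
        present = {line.strip() for line in config.splitlines()}
        missing = [line for line in required if line not in present]
        if missing:
            violations[device] = missing
    return violations

if __name__ == "__main__":
    for device, missing in audit(DEVICE_CONFIGS, REQUIRED_LINES).items():
        print(f"{device}: missing {missing}")
```

In this sketch, a device with an empty violation list is considered compliant; the remediation step would then push the missing lines back to the device after review and approval.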


In addition to following these steps, using a tool for continuous monitoring of network devices for configuration changes and change management adds to security and helps achieve compliance.


If you are ready to start your NIST FISMA or DISA STIG implementation and want a deeper understanding of how to achieve compliance, and how to automate these processes with continuous monitoring, download the following SolarWinds white paper: “Compliance & Continuous Cybersecurity Monitoring.”


But for those of you who would like to test a tool before deploying it into a production network, SolarWinds Network Configuration Manager is a configuration and change management tool that can integrate with GNS3, a network simulator. Refer to the integration guide for details.


Happy monitoring!

A few months ago, SolarWinds asked me to evaluate their IPAM product and to see how it compares to Microsoft’s IPAM solution that is built into Windows Server 2012 and Windows Server 2012 R2. In doing so, I constructed a multi-forest environment and worked through a feature by feature comparison of the two tools.


Obviously, any third-party product should provide functionality beyond that of the built-in management tools. Otherwise, what’s the point of using a third-party management tool? That being the case, I’m not going to bother talking about all of the great functionality that exists within SolarWinds IPAM. I’m sure the SolarWinds marketing staff could do a better job of that than I ever could.


What I do want to talk about is a simple question that someone asked me while I was attending Microsoft Ignite. That person said that they had heard that Microsoft IPAM really didn’t work very well and wondered if they were better off continuing to use spreadsheets for IP address management.


Here is my honest opinion on the matter:


Microsoft IPAM works, but it has a lot of limitations. For instance, a single Microsoft IPAM instance can’t manage multiple Active Directory forests and Microsoft IPAM does not work with non-Microsoft products. I think that using a third party product such as SolarWinds IPAM is clearly the best option, but if a third party management tool isn’t in the budget and you can live with Microsoft IPAM’s limitations then yes, it will work.


Having said that, there are two more things that you need to know if you are considering using Microsoft IPAM. First, even though it is relatively easy to set up Microsoft IPAM, it can be really tricky to bring your DNS and DHCP servers under management. The process is anything but intuitive and often requires some troubleshooting along the way. In fact, I recently wrote a rather lengthy article for Redmond Magazine on how to troubleshoot this process (this article will be published soon).


The second thing that you need to know is that there is a bit of a learning curve associated with using the Microsoft IPAM console. There are times when you may need to refresh the console without being told to do so. Similarly, there are some tasks that must be performed through the DNS or DHCP management consoles. It takes some time to learn all of the console’s various nuances and you may find that a third party tool makes the management process easier and more efficient.


Many of us use Windows Server in a variety of roles, including DHCP and DNS.  Microsoft DHCP and DNS services are provided at no extra cost and do a good job.  Besides, Active Directory depends on DNS, so many organizations choose to use the Microsoft DNS services if for no other reason than to support Active Directory.


Many of us (especially those still using spreadsheets to track IPs) were excited to see a new IP Address Management utility bundled with Windows Server 2012.  However, Microsoft IPAM is comparable to many of Windows Server’s other built-in management tools.  That is to say, these tools are functional, but not necessarily elegant, and they often suffer from a number of limitations. In fact, the Microsoft IPAM tool has many of the same limitations as the built-in DNS and DHCP management tools.


For example, Microsoft IPAM works great within a single AD forest, but it won’t work across forest boundaries. Organizations that have multiple Active Directory forests can deploy multiple IPAM instances, but these instances are not aware of one another and do not provide any sort of data synchronization.


The Microsoft IPAM console does a decent job of managing Microsoft DHCP servers, but is not designed to replace the DHCP MMC. Unfortunately, the Microsoft IPAM console provides very little DNS functionality, although this is expected to improve over time.


So why does any of this matter? Because your time is worth something.  If a tool is free but doesn’t save you time, it is not cost-effective. This is why some organizations eventually graduate from Microsoft’s bundled tools to third-party solutions.


What is your point of view?  Do you use Microsoft IPAM?  Why or why not?  Have you graduated?  If so, to what and why? If you would like to learn more about how Microsoft IPAM may cost you more, view this recorded webinar.


As SolarWinds’ Head Geek, and a somewhat self-respecting IT professional, I try to retain at least some decorum when I post to Geek Speak. But this year I’m bursting at the seams with anticipation for what we have in store for you in this year’s thwackCamp! (Hyperbolic much?)


You, the thwack community, have really embraced thwackCamp (most recently our thwackCamp Holiday thwackTacular 2014), and attendance has grown substantially again.  As a result, this year we’re offering two tracks, a real Keynote featuring Nikki and Joel, and a DevOps Experts Panel with Caroline McCrory, Dave McCrory, Michael Coté and Matthew Ray.


Between those, we’ll have several deep-dive, best-practice, and how-to sessions on a number of topics covering both SolarWinds products and IT management in general.  Again this year, all topics are coming from you, the amazing IT professional members of thwack.  We think you’ll really like them.


Of course, all the thwackCamp favorites are back again this year, including lots of giveaways, multiple live chat lobbies with our presenters, and the Head Geeks. I’ll even be doing live member shout-outs and taking some tricky questions on-air.


Let’s do this


First, make sure you have a thwack account!  The Keynote and Experts Panel will be open to all, but the track sessions will only be open to members of the thwack community.


Next, head over to the thwackCamp event page and check out the schedule.  See?  It has its own URL now; we’re not messing around this year. (OK, truthfully, you can browse the event catalog even without a login, so technically you can defer step one until you’re ready.)  You may also register for individual sessions by clicking on them to view their details.  That will make it easy to manage your schedule in the thwack calendar.


Last, be social.  We’ll be keeping an eye out for tweets with the #thwackCamp hashtag, and we’ll retweet great ones.  Or in my case really geeky ones.  Seriously, I just tweeted a pic from Cisco Live of the 7004 core routers in the Cisco NOC with the caption “A little Nexus never hurt anybody”. Get it? It’s the smallest N7000 chassis? Ugh.  I’m pretty sure you can do better.



So get ready, get registered, and then have fun at thwackCamp again this year!

I have a confession: I like documentation.


I know.


But I have good reasons. Well, some reasons are better than others. I like to type (not really a good reason); I like to write (although I'd argue I'm not all that great at it); and I like to have good references for projects I've done that will hopefully be useful to any future person--or myself--who has to support what I've put together (OK, this is actually a good one).


When talking about data projects or DBA-related work, reference documentation can contain a lot of nitty-gritty information. There are two specific areas that I think are most important, however. The first is useful for Business Intelligence projects, and usually takes the form of a combined Data Dictionary/Source-to-Target Mapping listing. The other is in the vein of my post from last week, wherein I discussed asking the question of "why" during the early stages of a project. Even having only these "why" questions and answers written down can and will go a long way towards painting the picture of a project for the future person needing to support our masterpieces.


As good as it truly is for everyone involved in a project, writing reference documentation isn't necessarily easy or fun. One of the most frustrating things is that, in a lot of instances, time to write this documentation isn't included in the cost or schedule of the project; especially, in my experience, if it's following something like the Scrum methodology. In the case of a "day job," spending time on documentation may not feel immediately necessary, as the people who participated in the project theoretically aren't going anywhere, and the time spent is time taken away from other work they could be doing. In the case of an outside consultancy, paying an outside firm to sit around and write documentation has an obvious effect on the project's bottom line.


If I had to guess, I'd say most of you readers out there aren't as into documentation as I am…I really do understand. But at the same time, what do you do for the future? Do you rely on comments in code or something like Extended Properties on database objects?

And, for those of you that do attempt to provide documentation… do you have to make special efforts to allow the time to create it? Any secrets to winning that argument? ;-)


Can you recommend ...

Posted by scuff Jun 10, 2015


That has to be one of my most feared phrases … “Can you recommend …?” Often the person is actually saying, “Can you make the buying decision for me?” To which the answer is, "No, I can’t make up your mind for you. Would you ask me to choose your car or your house?"


The easiest way to keep IT people busy is to lock us in a room until we can agree on the best anti-virus software. 


The problem with technology is that there is no ‘best’. There’s only the ‘best for you’. You can be guided to that decision by input from others, but here’s the thing … that input is going to be coloured by their experience and the experiences of their peers.


And as technology evolves and the capabilities of different products start to blur, you can find yourself staring at things that do basically the same thing. So how do you choose? Or how do you recommend what’s truly best for the business, putting personal experience aside? Should you put personal experience aside?


Put your hand up if you’ve ever seen a company throw out a software vendor because a new manager came in preferring something else. Exchange versus Lotus Notes. Microsoft versus Google. On-premises versus cloud. Mac versus PC. All of these decisions are influenced by the previous experiences of the decision makers.


My tactic is to squeeze as much information out of them as I can about how they work, what they want to achieve, etc., and then say, “Well, this is what I’d do if I were you…”  But I can almost guarantee someone else will have a different opinion. Opinions are like bellybuttons. Everybody’s got one!


How do you handle the recommendations question? Do you weigh up all of the available options for them, or go on past experience?



The human factor

Posted by Vegaskid Jun 8, 2015

In my last article (Defence in depth), I wrote about a number of different approaches that should be considered for a defence-in-depth security model. In this article, I go into a little more depth on a topic that is perhaps the most exciting for me, but also one of the hardest to fully mitigate: the human factor.


Imagine a fantasy world where security vendors' claims are true: their products protect you against the bad guys' complete technological arsenal. Every time attackers try to infiltrate your network, from the outside or within, they are detected and blocked with no impact on your resources. I did ask you to use your imagination! That's an unlikely scenario, as I'll discuss in an upcoming post, but even if it could come to fruition, if your human resources have not been trained to a suitable level in InfoSec, a host of other attack vectors remain at the attacker's disposal. The list below outlines some of them:


  • USB drive-by. An attacker drops a USB pen drive in an effective location, a member of staff picks it up, and curiosity gets the better of them, leading them to plug it in to a corporate machine. An effective location could be the car park, reception, the reception-area toilet, or a popular spot nearby where staff meet at lunch or after work, e.g. a cafe, park or bar
  • Phishing email. A phishing email is one that tries to extract information from you. An example would be one that purports to be from your bank, saying you need to log in and confirm your details. You click on what looks like a legitimate link and are taken to what looks like your bank's login page. In effect, you are directed to a clone of your bank's website controlled by the attackers, who capture your legitimate details and can then use them to log in to your real bank account. That would be a personal attack, but imagine how many vendors, suppliers, partners, etc. your company works with. You probably have many of their logos on your corporate website, so it's easy to find some of this information out
  • Phone calls. An attacker calls your Helpdesk claiming to be the CEO and asks for their password to be reset. Maybe you have a process in place for ensuring the request is legitimate, but what if the attacker uses guilt and authority to pressure the Helpdesk advisor into bypassing that process? Next thing, the CEO's email account has been breached; think of the treasure that most likely lies within. Maybe the real CEO made such a hurried request in the last few months and somebody got their fingers burnt for refusing to make the change
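As a toy illustration of the cloned-link mechanic in the phishing example: the giveaway is often that the domain shown to the user and the domain the link actually points to disagree. The domains and the heuristic below are invented for illustration; real phishing detection is far more involved:

```python
# Toy phishing heuristic: does the link text shown to the user name the
# same domain the link really points to? (Example domains are invented.)
from urllib.parse import urlparse

def _domain(text: str) -> str:
    """Extract a bare domain from link text or a URL (toy version)."""
    netloc = urlparse(text if "//" in text else "https://" + text).netloc.lower()
    return netloc[4:] if netloc.startswith("www.") else netloc

def looks_suspicious(display_text: str, href: str) -> bool:
    """Flag links whose visible text names a different domain than
    the real destination (allowing legitimate subdomains)."""
    shown, actual = _domain(display_text), _domain(href)
    return not (actual == shown or actual.endswith("." + shown))

print(looks_suspicious("www.mybank.example", "https://login.mybank.example/"))  # False: same domain
print(looks_suspicious("www.mybank.example", "https://mybank.evil.example/"))   # True: mismatch
```

Training people to make this same comparison by eye (hover over the link, read the real destination from right to left) is the human equivalent of this check.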


This short list highlights a number of points, the primary one being that, more often than not, your people are the weakest link in your security chain. Most people are aware of the types of attacks listed above, so training needs to be clever: not just a once-a-year box-ticking exercise, but ongoing and done in innovative ways to prevent message fatigue. The last example also shows that you need buy-in from the top all the way down. If your CEO needs a password reset in a hurry that breaks protocol, staff should be commended for not complying with the request, no matter how high up it comes from.


I'd love to know if you have any specific tales of the human link being leveraged in an attack.

The speed of technology change is increasing at an alarming rate. Startups can fund, build & deploy software in shorter timeframes, and consumers can install and adopt it quickly and easily. Gone are the days of years between major versions, as software vendors roll out updates without relying on shipping CDs.


So how do IT professionals keep up with it all? How do you find the time to do your job and upskill and understand the latest threat or IT trend being talked about in the news?

Of course, the Thwack community is a great place to discuss announcements and new trends amongst your peers. But how else do you keep pace with technology changes? Here are a few of my favourite tools and sources:


Cloud Business Blueprint podcast - The latest announcements in Cloud computing and tips for running a successful IT services business.


Microsoft’s Channel 9 - Microsoft event presentations, including Build and Ignite. Don’t be put off by the msdn tag – there’s a ton of information on performance, security, migrations & upgrades too. They’ve also just released an iOS app.


Microsoft Partner Network Blog - Subscribe via email so you don’t miss an announcement, but you can read & delete the ones that aren’t relevant to you.


Twitter heroes: Find the engaging, knowledgeable people in your field. My favorites include @Edbott @troyhunt @mkasanm @shanselman @WinObs @maryjofoley @cfiessinger @tomwarren


Pluralsight - The content in the IT administrator track is outstanding & will leave you wondering when you're going to fit your real work in.


Pocket - I’ve heard people rave about this great ‘read later’ app. I have my own ‘read later’ folder in my Inbox, but it’s not so little anymore. The app is on my to-do list to check out. Any fans of it out there?


OneNote or EverNote - Pick one of these ‘digital scrapbooks’ for saving handy URLs, webinar screenshots & course notes.


While it’s important to have an awareness of the changes in tech, you also need to have a filter. Watching the Microsoft Build keynote highlights, I got excited about Docker and the containerization of apps and I’m not a developer! But I can see the potential. I’ve also seen product announcements where I think “that’s great but I don’t have time for it right now”. Be aware of it, but don’t go deep diving into technologies that aren’t on your radar currently or in the near future or you’ll end up with Bright Shiny Object Syndrome.


Just as importantly though, how are you using the software and tools you have in your hands right now? Have you taken the time to check out their latest features and apply them to how you work? How has the latest version of X made you more efficient, improved your day, or helped you get a better result? Take time this week to investigate one software feature you didn’t know existed. Because if you’re still using Office like you did in 1997, what’s the point in upgrading?


What are your favorite blogs, podcasts and websites? How do you manage your information sources?

Leon Adato

Respect Your Elders

Posted by Leon Adato Employee Jun 4, 2015

"Oh Geez," exclaimed the guy who sits 2 desks from me, "that thing is ancient! Why would they give him that?"


Taking the bait, I popped my head over the wall and asked "what is?"


He showed me a text message from a buddy, an engineer (EE, actually) who worked for an oil company. My co-worker’s iPhone 6 displayed an image of a laptop we could only describe as “vintage”:

(A Toshiba Tecra 510CDT, which was cutting edge…back in 1997.)


"Wow," I said. "Those were amazing. I worked on a ton of those. They were serious workhorses; you could practically drop one from a 4-story building and it would still work. I wanted one like nobody's business, but I could never afford it."


"OK, back in the day I'm sure they were great," said my 20-something coworker dismissively. "But what the hell is he going to do with it NOW? Can it even run an OS anymore?"


I realized he was coming from a frame of reference common to all of us in I.T.: newer is better. Period. With few exceptions (COUGH-Windows M.E.-COUGH), the latest version of something, be it hardware or software, is always a step up from what came before.


While true, it leads to a frame of mind that is patently untrue: a belief that what is old is also irrelevant. Especially for I.T. professionals, it’s a dangerous line of thought that almost always leads to unnecessary mistakes and avoidable failures.


In fact, ask any I.T. pro who’s been at it for a decade, and you'll hear story after story:

  • When programmers used COBOL, back when dinosaurs roamed the earth, one of the fundamental techniques drilled into their heads was "check your inputs." Look at the latest crop of exploits, be it an SSLv3 issue like POODLE, a SQL injection, or any of a plethora of web-based security problems, and the fundamental flaw is the server not checking its inputs for sanity.
  • How about the OSI model? Yes, we all know it's required knowledge for many certification exams (and at least one IT joke). But more importantly, it was (and still is) directly relevant to basic network troubleshooting.
  • Nobody needs to know CORBA database structure anymore, right? Except that a major monitoring tool was originally developed on CORBA, and that foundation has stuck. Which is why, if you try to create a folder-inside-a-folder more than 3 times, the entire system corrupts: CORBA (one of the original object-oriented databases) could only handle 3 levels of object containership.
  • PowerShell can be learned without understanding Unix/Linux command-line concepts. But it’s sure EASIER to learn if you already know how to pipe ls into grep into awk so that you get a list of just the files you want, sorted by date. Supporting that technique (among other Unix/Linux concepts) was one of the original goals of PowerShell.
  • Older revs of industrial motion-control systems used specific pin-outs on the serial port. New USB-to-serial cables don't mimic those pin-outs correctly, and trying to upload a program with the new cables will render the entire system useless.
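The first lesson in that list, "check your inputs," translates directly into modern code. Here is a minimal Python sketch, using a hypothetical in-memory users table, of the difference between string-built SQL and a parameterized query:

```python
# "Check your inputs": the COBOL-era rule behind SQL injection defenses.
# The users table below is hypothetical, for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "x' OR '1'='1"  # a classic injection attempt

# Unsafe: the input is spliced into the SQL text, so the OR clause
# executes and every row comes back.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: the driver passes the input as data, never as SQL, so the
# injection string simply matches no row.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # both rows leak
print(safe)    # []
```

Whether the rule is enforced in COBOL, in a stored procedure, or by a parameterized query, the principle the old-timers taught is the same: never trust input to be what you expect.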


And in fact, that's why my co-worker's buddy was handed one of those venerable Tecra laptops. It had a standard serial port, and it came preloaded with the vendor's DOS-based ladder-logic programming utility. Nobody expected it to run Windows 10, but it fulfilled a role that modern hardware simply couldn't.


It's an interesting story, but you have to ask: aside from some interesting anecdotes and a few bizarre use cases, does this have any relevance to our day-to-day work?


You bet.


We live in a world where servers, storage, and now the network are rushing toward a quantum singularity of virtualization.


And the “old-timers” in the mainframe team are laughing their butts off as they watch us run in circles, inventing new words to describe techniques they learned at the beginning of their career; making mistakes they solved decades ago; and (worst of all) dismissing everything they know as utterly irrelevant.


Think I’m exaggerating? SAN and NAS look suspiciously like DASD, just on faster hardware. Services like Azure and AWS, for all their glitz and automation, aren't as far from rented time on a mainframe as we'd like to imagine. And when my company replaces my laptop with a fancy "appliance" that connects to a Citrix VDI session, it reminds me of nothing so much as the VAX terminals I supported back in the day.


My point isn't that I’m a techno-Ecclesiastes shouting "there is nothing new under the sun!" Or some I.T. hipster who was into the cloud before it was cool. My point is that it behooves us to remember that everything we do, and every technology we use, had its origins in something much older than 20 minutes ago.


If we take the time to understand that foundational technology, we have the chance to avoid past missteps, leverage undocumented efficiencies built into the core of the tools, and build on ideas elegant enough to have withstood the test of time.

Got your own "vintage" story, or long-ago-learned tidbit that is still true today? Share it in the comments!

Over the last few months, I've spent a lot of time talking to different people about the importance of the "B" in BI--as in "Business Intelligence" (stay with me, DBAs, I'll get you roped in here soon enough). I joke that that "B" is there for a reason and it's no accident that it's the first letter in the acronym. The point I make is that the hard part of BI projects isn't deciding what technology to use or working through hurdles that come from questionable data, but instead, understanding the Business needs that got the project started in the first place.


Or, put another way, making sure that the "Why?" questions are asked first and foremost, and that the answers to those questions are used to ask better "What?" questions on down the line while the solution is designed, developed, and implemented.


This works well when the word "Business" is directly included in the type of data work I do, but it applies just as readily to plain ol' DBA work. Everything from data modeling considerations to planning big database consolidation projects needs to start by asking and understanding the "Why"s coming from our business leaders and our userbase. Sometimes these questions are easy--"Why do you want to be able to restore yesterday's database backup?" Other times they might be hard to get answers to--"Why is this KPI calculated three different ways in this one report?" (true story!). Chances are, asking these questions will have some kind of monetary impact. For a BI project, it may mean finishing sooner with less re-work late in the game; for a DBA team, the need to upgrade some old, unreliable hardware may lead to a server virtualization project.


I've seen both BI and DBA projects go sideways because the right questions weren't asked early enough in the project lifecycle; conversely, I've worked with some great BAs who are fantastic at understanding the business rationale and ensuring we, the technology team, all did, too.


What kind of save-the-day stories do you have that started out by asking "Why?"


Are You Pi Shaped?

Posted by jtroyer Jun 2, 2015

We’re well into 2015. How is your year going - and how are you preparing for the rest of your career?  Earlier this year, four of us got together and had a great talk about technology predictions for 2015.


One of the things I talked about was becoming a “Pi-shaped expert.” You might ask, “What’s a pi-shaped expert?” Well, it’s all about how you develop your skills and your career. If you read advice on skills and jobs, you will often see advice on being a “T-shaped expert.”


One of the things we struggle with in IT is being a generalist vs. being a specialist. There are lots of ways of looking at this: Kong Yang has written on Geek Speak about Generalists, Versatilists, and Specialists, which is a different window into this same issue. But sticking with our letter-shaped model, let's define the T-shaped expert. This kind of person has a broad range of skills which they’re reasonably familiar with; that’s the top of the “T”. Maybe for you it's Windows, networking, security, being really good at writing requirements documents, scripting, and Exchange. And then the T-shaped expert also has the stem of the “T”—the one skill area in which they're an expert. Maybe for you it's VMware and virtualization.


The T-shape is what gets you the good job. Generalists are where we all start, but generalists in IT tend to get stuck in smaller shops doing everything and are often underpaid and overworked. More advanced jobs and bigger environments require—and pay for—experts in a given area, but who can pinch hit in other roles.


So being a T-shaped expert is the best way to get a good job, move to a better situation, and have a good career. At least for a while. If you became the SAN expert or the VMware expert ten years ago, you have done well in the ensuing decade. But if you’re *still* the SAN expert or the VMware expert ten years later, and that’s all you are bringing to the table, you should be a little concerned. The SAN of 2015 is a lot easier to manage, and in many emerging environments in the cloud or with hyperconverged infrastructure, there’s not even a SAN to manage. Your T-shaped skill simply isn’t as valuable.

(Image: the Pi-shaped expert)

This is where the Pi-shape comes in. While you are still working in that T-shaped job with your T-shaped pay, you need to be building new interests and new skills. You should be using small projects at work as well as side projects outside work to develop another leg on your “T,” making it a “Pi.”


That new skill in 2015 might be something like automation and orchestration, configuration management, DevOps, or containers. Or perhaps it’s expertise in hyperconverged infrastructure, software-defined networking, or public cloud. Or it could be going deeper into application performance management and relating back to DevOps and Continuous Integration. It should be something that you find interesting and something that is becoming increasingly relevant now and growing in the future. As you develop this second leg of your “Pi,” you open up new opportunities in the future.


IT is about continuous learning. Don’t get stuck being a generalist. But also don’t get stuck with one deep skill that’s stuck in the past. You probably have decades more left in your career. Always be stepping to a new stone in the river by becoming a Pi-shaped expert.


Defence in depth

Posted by Vegaskid Jun 1, 2015

Ever since I started working in IT and took an interest in information security, I have heard the term 'defence in depth' bandied around, qualified to varying degrees. In short, defence in depth is an approach where you place different security controls at different points in your overall system. It is also referred to as the castle approach, harking back to days of yore. In those days, tall, thick walls were not always enough. You wanted to ensure there was only one entrance to your castle (ignoring the fact you might have several back doors for quick escapes!), a moat to protect your perimeter, and a drawbridge to allow only authorised persons across. Once in, visitors would often need to relinquish their weapons, and the upper classes would often live in the middle of the grounds for further protection.



In the world of IT, the same approach is realised through the following, non-exhaustive list:



  • Firewalls. This is the drawbridge. Only allow traffic from the right sources, going to the right destinations. Everything else gets left outside the gate
  • IPS/IDS. You almost certainly want the Internet to reach your public web server, but if somebody out there is trying to attack a weakness in your application, a basic L3/L4 firewall won't cut the mustard. You need something that can look into the application traffic and determine whether something untoward is happening. Depending on your setup, it can drop the traffic, slow it down, send you an alert or even launch a counter-attack
  • Patching. Vulnerabilities in your operating system, middleware and applications can often be mitigated by a security device such as an IPS, but what if the attacker is already in your network, behind that layer of protection? It is extremely important that you keep all your systems patched and have a rock-solid patching policy that is adhered to. This applies not only to servers, but to network devices, storage and any other device that your IT relies on
  • Physical. Where is your data physically kept? On a machine under your desk? In a comms room somewhere at your head office? Or maybe in a tier 3 data centre? You could have best-of-breed security devices at every level of your network, but if you leave important printouts on your desk or like taking home key files for the board on an unencrypted USB key, you are negating all the protection they offer
  • Policies. It's all very well talking the talk, but if you don't have all of these steps and processes documented somewhere, people won't even remember that there is a way they are supposed to be doing something, or you'll end up with 10 engineers doing the same thing in 10 different ways in a vague attempt to comply. Get buy-in from senior management and create a culture of security that people will not try to circumvent. Which leads to my next point...
  • Training. InfoSec training can traditionally be very dry and usually comes from a "let's plough through this stuff for another year" angle. That is because the people doing the training often come from an InfoSec compliance background rather than Security Operations, and it's a box-ticking exercise rather than an attempt to really engage people in thinking about InfoSec all year round
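The firewall point above boils down to a default-deny stance: nothing passes unless a rule explicitly allows it. Here is a minimal sketch of that idea in Python. The rule names, subnets and ports are all hypothetical, invented purely for illustration; a real firewall works on packets, not strings, but the decision logic is the same shape.

```python
# Default-deny allow-list, the core logic behind the "drawbridge" firewall.
# All subnets, ports and rule names below are hypothetical examples.
from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    src_prefix: str  # allowed source network prefix, e.g. "10.0.0."
    dst_port: int    # allowed destination port


ALLOW_RULES = [
    Rule(src_prefix="10.0.0.", dst_port=443),  # internal clients -> web tier
    Rule(src_prefix="10.0.1.", dst_port=22),   # admin subnet -> SSH
]


def is_allowed(src_ip: str, dst_port: int) -> bool:
    """Default-deny: traffic passes only if an explicit rule matches.

    Anything that matches no rule 'gets left outside the gate'.
    """
    return any(src_ip.startswith(r.src_prefix) and dst_port == r.dst_port
               for r in ALLOW_RULES)


print(is_allowed("10.0.0.42", 443))   # matches the first rule
print(is_allowed("203.0.113.9", 443)) # no rule matches: denied
```

The important design choice is the direction of the default: you enumerate what is permitted and deny everything else, rather than trying to enumerate every bad thing, which is the losing game the IPS/IDS layer then helps with.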



The list above is brief and incomplete, but you can see that even in that list, there is a broad range of areas that need addressing to really give good protection.



My question to you now is: how good is your approach to information security? Have you worked at companies that ignored most of these well-known approaches? Have others been a shining beacon of how to protect your treasured resources? I look forward to hearing your thoughts and experiences.
