
Geek Speak


I've had the opportunity over the past couple of years to work with a large customer of mine on a refresh of their entire infrastructure. Network management tools were one of the last pieces to be addressed, as the emphasis had been on legacy hardware first and the direction for management tools had not been established. This mini-series will highlight this company's journey: the problems solved, the insights gained, and the unresolved issues that still need addressing in the future. Hopefully it will help other companies or individuals going through the process. Topics will include discovery around types of tools, how they are being used, who uses them and for what purpose, their fit within the organization, and lastly what more they leave to be desired.


Blog Series

One Company's Journey Out of Darkness, Part I: What Tools Do We Have?

One Company's Journey Out of Darkness, Part II: What Tools Should We Have?

One Company's Journey Out of Darkness, Part III: Justification of the Tools

One Company's Journey Out of Darkness, Part IV: Who Should Use the Tools?

One Company's Journey Out of Darkness, Part V: Seeing the Light

One Company's Journey Out of Darkness, Part VI: Looking Forward


Throughout this series I've been advocating the formation of a tools team, whether it is a formalized group of people or just another hat that some of the IT team wears. This team's task is to maximize the impact of the tools the organization has chosen to invest in, and understanding who is using each tool is a critical component of that success. One of the most expensive tools that organizations invest in is their main network monitoring system. This expense may be the CapEx spent obtaining the tool or the sweat equity put in by someone building out an open source offering, but either way these dashboards require significant effort to put in place and demand effective use by the IT organization. Most of IT can benefit from these tools in one way or another, so having Role Based Access Control on these platforms is important so that access can be granted in a secure way.
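
To make the RBAC point concrete, here is a minimal sketch of how a tools team might write down who gets what. The roles, tool names, and permission levels are invented for illustration, not taken from any particular product:

```python
# Hypothetical role-to-tool access map for a monitoring platform.
# Roles, tool names, and permission levels are illustrative only.
ROLE_ACCESS = {
    "network_admin":  {"npm": "admin",     "dpi": "admin",     "ipam": "admin"},
    "server_admin":   {"npm": "read-only", "dpi": "read-only", "ipam": "read-write"},
    "security_admin": {"npm": "read-only", "dpi": "read-only", "ipam": "read-only"},
    "help_desk":      {"npm": "read-only", "ipam": "read-only"},
}

def access_level(role: str, tool: str) -> str:
    """Return the permission a role holds on a tool, or 'none'."""
    return ROLE_ACCESS.get(role, {}).get(tool, "none")

print(access_level("help_desk", "ipam"))  # read-only
print(access_level("help_desk", "dpi"))   # none
```

The value here is less in the code than in the exercise: writing the map down forces the tools team to decide, explicitly, who gets access to what.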


Network Performance Monitoring

NPM aspects of a network management tool should be accessible by most, if not all, teams, although some may never opt to actually use them. Outside of the typical network team, the server team should be aware of typical throughput, interface utilization, error rates, etc., so that the team can be proactive in remediating issues. Examples where this has come in useful include troubleshooting backup-related WAN congestion and usage spikes around anti-virus updates in a large network. In both of these cases, the server team was able to provide insights into the configuration of the applications and options to help remedy the issue in unison with the network management team. Specific roles benefiting from this access include: Server Admins, Security Admins, WAN Admins, and Desktop Support.
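
For a feel of the numbers behind those dashboards, here is a rough sketch of how interface utilization and error rate fall out of two successive counter samples, which is essentially what an NPM tool computes when it polls IF-MIB style counters. The sample values are fabricated:

```python
# Derive utilization and error rate from two counter samples.
# Counter semantics mirror IF-MIB (ifInOctets, ifSpeed); values are made up.

def utilization_pct(octets_t0: int, octets_t1: int,
                    interval_s: float, if_speed_bps: int) -> float:
    """Percent of link capacity used over the polling interval."""
    bits = (octets_t1 - octets_t0) * 8
    return bits / (interval_s * if_speed_bps) * 100

def error_rate_pct(errors_delta: int, packets_delta: int) -> float:
    """Errors as a percentage of packets seen in the interval."""
    return 0.0 if packets_delta == 0 else errors_delta / packets_delta * 100

# Two samples taken 300 seconds apart on a 100 Mbps interface:
print(round(utilization_pct(1_000_000, 250_000_000, 300, 100_000_000), 1))  # 6.6
print(round(error_rate_pct(12, 48_000), 3))                                 # 0.025
```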


Deep Packet Inspection/Quality of Experience Monitoring

One of the newer additions to NMS systems over the years has been DPI and its use in shedding some light on the QoE for end users. Visibility into application response time can benefit the server team and help them be proactive in managing compute loads or improving on capacity. Traps based on QoE variances can help teams responsible for specific servers or applications provide better service to business units. Specific roles benefiting from this access include: Server Admins, Security Admins, Desktop or Mobile Support
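
As a sketch of what a QoE variance trap boils down to, the snippet below flags an application response-time reading that drifts well above its recent baseline. The three-standard-deviation rule and the sample data are assumptions chosen for illustration:

```python
# Flag an application response time that deviates sharply from baseline.
# The 3-sigma threshold and sample values are illustrative assumptions.
from statistics import mean, stdev

baseline_ms = [120, 118, 125, 122, 119, 121, 124]  # recent response times
current_ms = 180

mu, sigma = mean(baseline_ms), stdev(baseline_ms)
if current_ms > mu + 3 * sigma:
    print(f"QoE alert: {current_ms} ms vs. baseline {mu:.0f} ms (sigma {sigma:.1f})")
```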


Wireless Network Monitoring

Wireless has outpaced the wired access layer as the primary means of network connectivity. Multiple teams benefit from monitoring the air space, ranging from security to help desk and mobile support teams. In organizations supporting large guest networks (health care, universities, hotels, etc.), the performance of the wireless network is critical to the public perception of the brand. Wireless network monitoring now even appeals to customer service and marketing teams, and extending it to these non-IT teams can improve overall communication and satisfaction with the solutions. For teams with wireless voice handsets, telecom will benefit from access to wireless monitoring. In health care, there is a trend toward developing a mobile team, as these devices are critical to the quality of care. These mobile teams should be considered advanced users of wireless monitoring.


IP Address Management (IPAM)

IPAM is an amazing tool for organizations that have grown organically over the years. Using my customer as a reference, they had numerous /16 networks in use around the world; however, many of these were disjointed. This disjointed addressing strategy creates challenges from an IP planning standpoint, especially for any new office, subnet, DMZ, etc. I'd advocate read-only access for help desk and mobile support teams and expanded access for server and network teams. Awareness of an IPAM solution can reduce outages due to human error, and it provides a great visual reference as to the state of organization (or lack thereof) in a company's addressing scheme.
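
Python's standard ipaddress module makes this kind of problem easy to see for yourself. The sketch below, using invented prefixes, flags overlapping allocations and finds free space in a parent block, two of the everyday questions an IPAM tool answers:

```python
# Detect overlapping allocations and plan a new subnet.
# All prefixes below are invented for illustration.
import ipaddress

allocated = [
    ipaddress.ip_network("10.1.0.0/16"),
    ipaddress.ip_network("10.1.128.0/17"),   # overlaps the /16 above
    ipaddress.ip_network("172.16.40.0/24"),
]

# Flag overlapping allocations, a common symptom of organic growth.
for i, a in enumerate(allocated):
    for b in allocated[i + 1:]:
        if a.overlaps(b):
            print(f"overlap: {a} and {b}")

# Carve the next free /24 office subnet out of a parent block.
parent = ipaddress.ip_network("172.16.0.0/16")
used = {ipaddress.ip_network("172.16.40.0/24")}
free = next(s for s in parent.subnets(new_prefix=24) if s not in used)
print(f"next free /24 in {parent}: {free}")
```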

 

I personally do not advocate an environment that grants read-only access to anyone merely interested in these tools; the information they hold should be kept secure, as it could provide the seeds for a well-planned attack. Each individual given access to these tools should be made aware that they are a job aid and carry a burden of responsibility. I've also worked with some organizations looking for very complex RBAC for their management platforms; unless you have an extremely good reason, I'd shy away from this as well, since the added complexity generally offers very little.



As organizations roll out network management software and extend that software to a number of teams, they begin to gain insights that weren't visible before. These additional insights enable the business to make better decisions, recognize more challenges and inefficiencies, and so on.

 

For this customer, one of the areas in which we were able to vastly improve visibility had to do with the facilities team. This manufacturing site has its own power station and water plant, among other things, to ensure that manufacturing isn't ever disrupted. In working on other projects with the team, it became obvious that the plant facilities team was in the dark about network maintenance: they would mobilize into "outage mode" whenever the network was undergoing maintenance. After spending time with this team and understanding why they had to react the way they do, we were able to extend a specific set of tools to them that would make them aware of any outages, give them insight into when and why certain devices were offline, and provide visibility into when the network would come back online. This increased awareness of their needs, combined with the additional visibility from network tools, has significantly reduced the average cost of an outage and solved some communication challenges between various teams. We were also able to give them a dashboard that helps discern between network-level and application-level issues.

This is a brief example of how we can all start to build the case for network management tools in a business-relevant way. Justifying these tools has to be about the business rather than simply viewing red/yellow/green status or how hard a specific server is working. A diverse team can explain the total business impact better than any single team could. For admins looking to acquire these tools, look for some of these business-impacting advantages:


Reduced Downtime

We always seem to look at this as network downtime; however, as in the example above, there are other downtime issues to be aware of, and all of them can impact the business. Expanding the scope of network-related issues can increase the perceived value of any networking tool. Faster time to resolution through the added visibility is a key contributor to reduced downtime, and tools that allow you to be proactive also have a very positive effect.


Supportability

This seems rather self-explanatory; however, enabling the help desk to be more self-sufficient through these tools can reduce the percentage of escalated tickets. Escalated tickets typically carry a hefty price and also keep the escalation team from working on other issues.


Establish and Maintain Service Level Agreements

Many organizations talk about SLAs and expect them from their carriers, but how many offer them to their own company? I'd argue very few do, and it is something that would benefit the organization as a whole. An organization that sees IT as an asset will typically be willing to invest more in that group. As network admins, we need to make sure we are providing value to the company, and predictable response and resolution times are a good start.
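
A little arithmetic grounds that SLA conversation. This back-of-the-envelope sketch shows how much downtime a given availability target actually permits each month; the targets and the 90-minute sample outage are illustrative:

```python
# How much downtime does an availability target actually allow?
# Targets and the sample outage are illustrative.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200

for target in (99.0, 99.9, 99.99):
    allowed = MINUTES_PER_MONTH * (1 - target / 100)
    print(f"{target}% uptime allows ~{allowed:.0f} minutes of downtime per month")

# Measured availability after a single 90-minute outage this month:
measured = (1 - 90 / MINUTES_PER_MONTH) * 100
print(f"measured: {measured:.3f}%")
```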


Impact on Staff

Unplanned outages are a massive drain on resources; from help desk to admins to executives, everyone is on edge. They also often carry the financial impacts of overtime, consulting fees, etc., in addition to intangibles like work/life balance.

Can server monitoring be configured in a way that makes it effective? Is there such a thing as a monitoring project gone right?  In my experience it is rare that a team gets what they want out of their monitoring solution, but rest assured it is possible with the right level of staffing and effort.

  

Monitoring Project Gone Right

As many of us know, server monitoring is very important for ensuring that our business systems do not fail and that our users are able to do their jobs whenever they need to.  When we are supporting hundreds and possibly even thousands of servers in our enterprises, it would be impossible to do this manually.  The right underlying system is the key to success.  When we are handed a pager (yes, there was a time when we all had pagers), we want to know that the information that comes through is real and actionable.  Throughout my entire career, I have worked only one place that I feel did monitoring really well.  I did not fall ill from being worn down and woken up by pages that were not actionable when I was on-call.  I could actually be certain that if my pager went off in the middle of the night, it was for a true purpose.


Steps to Success

So what is the recipe for successful monitoring of your servers? Let’s take a look at how this can be done.


  • Make sure this is a real project with dedicated infrastructure resources.  This will not only allow for development of skill sets, it will ensure that the project is completed on a schedule.
  • Put together a Playbook, which serves multiple purposes (a sketch of one entry as structured data follows this list):
    • Provides a detailed list of the server monitoring thresholds and commitments for your servers
      • Document any exceptions to the standard thresholds defined
    • Limits the number of core application services monitored to reduce complexity
    • Allows your application owners to determine which software “services” they want monitored
    • Allows the application owner to decide what action should be taken if a service fails (i.e., page the application owner, restart the service, page during business hours only)
  • Make sure you are transparent and work with ALL of IT.  This project requires input from all application owners to ensure that the server monitoring team puts it together properly.
  • Revisit the Playbook on a predefined interval to ensure that the correct system monitoring and actionable response is still in place.
  • Refer to “Server Monitoring from the Real World Part 1” for some additional thoughts on this topic.
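
To make the Playbook idea concrete, here is a minimal sketch of what one server's entry might look like as structured data, with exceptions overriding the standard thresholds. Every threshold, service name, and failure action below is an invented example, not a recommendation:

```python
# A hypothetical Playbook entry for one server, expressed as plain data.
playbook = {
    "server": "app01",
    "thresholds": {
        "cpu_pct":       {"warn": 80, "critical": 95},
        "memory_pct":    {"warn": 85, "critical": 95},
        "disk_free_pct": {"warn": 15, "critical": 5},
    },
    # Exceptions to the standard thresholds, with the reason documented.
    "exceptions": {
        "cpu_pct": {"critical": 99, "reason": "nightly batch jobs peg the CPU"},
    },
    # Decided by the application owner, including the action on failure.
    "services": {
        "order-api":  {"on_failure": "restart_then_page"},
        "report-gen": {"on_failure": "page_business_hours_only"},
    },
}

def effective_threshold(entry: dict, metric: str, level: str) -> int:
    """Documented exceptions override the standard thresholds."""
    return entry["exceptions"].get(metric, {}).get(
        level, entry["thresholds"][metric][level])

print(effective_threshold(playbook, "cpu_pct", "critical"))  # 99
```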


This may sound like a lot of work, but ensuring that every service and threshold monitored has an actionable response is imperative to long-term success.  In the end, this approach will significantly reduce the amount of effort and resources required to ensure that monitoring is everything your business needs to run smoothly.

 

Concluding Thoughts

System monitoring done correctly is important for both the business and the engineers on your team.  When it is set up correctly with actionable responses, your team will not “tune out” their pages, and the quality of service provided to the business will be stellar.  Server and application uptime will also be at their best.

If you think you have a good handle on what’s available in the Cloud, think again. Not that I doubt your knowledge for one minute, but what I am sure of is the rapid pace of change in Cloud services, especially from Microsoft. The Cloud that you investigated 12 months or 6 months ago might now have service offerings that you’re not aware of. Nothing demonstrates this increased pace of technology change as well as Office 365 does. So let’s look at some of the new Office 365 features that have been introduced in the last 12 months. Because it’s a whole lot more than just hosted Exchange and SharePoint.

 

Delve – Search on steroids. Traditionally, if you wanted to find some information, you’d go to the place where that kind of data was kept and use that product’s search function (eg within Outlook or File Explorer). Delve sits inside an Office 365 tenant and analyses all of the data you have access to in one search pane. Information is beautifully presented, whether it was in your mail file, a shared mailbox you have access to, or a SharePoint document library. This breaks information out of silos and is especially handy for highlighting information provided by colleagues and stored in places you may not have thought to look (as long as you have access).

 

Sway – A new way to tell your story. Also available as a standalone product, Sway is now integrated into Office 365, appearing in the app picker. Labelled a ‘digital storytelling app’, some have wondered if this will bring an end to death by PowerPoint. Sadly, it lacks the ‘clicker’ integration for delivering a live presentation. What does shine is the automated information layouts that bring beautiful ‘readability’ without you needing to be a website developer.

 

Power BI – Display & query your data like never before. OK, so this one’s a little older than 12 months and is also available as a standalone product. The Office 365 power comes from using this powerful tool to display stunning representations of your data in real time inside your SharePoint Online portal. Natural language query lets you filter and redisplay that data. You really need to see it to believe it, but imagine your sales team seeing live sales data plotted in 3D columns on a map of the country and being able to drill down to find the best-selling city for widget X versus widget Y … all without touching a spreadsheet or ERP system.

 

Office 365 Groups – Groups, but not as you know them. This is truly an Office 365 innovation, not available anywhere else. They’re not security groups and they aren’t mailing groups. Office 365 Groups bring a collection of team resources to one place for you to access (emails, shared calendar, shared documents etc.) The most brilliant part is that a new person to the ‘team’ (Group) gets access to all of the historical data including previous emails sent amongst the group. This works well for team members that don’t all work in the same office, but is only available to users within your Office 365 tenant.

 

Office 365 Planner – Team task management. This is the newest baby, not available in even First Release tenants until next quarter. Office 365 Planner is what you get if Project and Pinterest had a baby. It’s not designed for complicated projects, but it provides a great team space for task creation, allocation and updates.

 

Mobile Device Management – BYOD fans are going to love this. While Mobile Device Management has been an Enterprise offering in the past, now an Office 365 subscriber can take advantage of some of these capabilities. Under the hood it will look strangely like Windows Intune (because it is). Bringing it to Office 365 makes it affordable for more organisations, but the killer feature is the ability to selectively wipe enrolled devices. So when staff have an Office 365 license on their phone, not only can you enforce passcodes if you wish, but you can delete all synced files and emails if their phone is lost, without wiping out the family photos in their camera roll.

 

Additional administration and protection features. Office 365 now lets administrators configure self-service password reset for users who have a secondary validation method (eg a mobile number or alternative email address). Custom branding is also available, to make people feel like they are logging onto one of the company’s systems. And data loss prevention has now extended beyond email and is available in SharePoint Online and OneDrive for Business, suggesting or blocking information from being shared outside of your organization.

 

Any surprises in that list? Did it change your perception of Office 365? Can you see a use for any of the new features within your own organization?

 

One thing’s for sure, this is just a taste of how agile Microsoft is being with this product suite. We’re going to see many more new announcements over the next 12 months and the Office Blog is one of the best places to keep informed.

 

-SCuff

I'm really excited to be heading out to Columbus, Ohio for DevOps Days on November 18 and 19. In fact, SolarWinds is proud to be a gold sponsor of the event this year. Which means we have a few spare tickets for the event.

 

SO... if you want to come hang out for a couple of days with the coolest DevOps people in the Midwest, all you need is a selfie!

 

Post a picture of you using your favorite SolarWinds tool down in the comments to this message (not in a new thread, not in some other conversation; right here in this message). The first 3 responses win themselves a ticket.

 

Remember, this is YOU using YOUR COPY of a SolarWinds tool. You aren't going to win by pulling up the demo site or with fancy Photoshop hacks. (I won't be fooled, but I will DEFINITELY be amused).

 

Good luck, and remember: pics or it never happened!


If you missed Part One of this series, you can find it here.

 

If you’re not prepared for the future of networking, you’re already behind.

 

That may sound harsh, but it’s true. Given the speed at which technology evolves compared to the rate most of us typically evolve in terms of our skillsets, there’s no time to waste in preparing ourselves to manage and monitor the networks of tomorrow. Yes, this is a bit of a daunting proposition considering the fact that some of us are still trying to catch up with today’s essentials of network monitoring and management, but the reality is that they’re not really mutually exclusive, are they?

 

In part one of this series, I outlined how the networks of today have evolved from those of yesteryear, and what today’s new essentials of network monitoring and management are as a consequence. If you paid careful attention, you likely picked up on how the lessons from the past that I described helped shape those new essentials.

 

Similarly, today’s essentials will help shape those of tomorrow. Thus, as I said, getting better at leveraging today’s essentials of network monitoring and managing is not mutually exclusive from preparing for the networks of tomorrow.

 

Before delving into what the next generation of network monitoring and management will look like, it’s important to first explore what the next generation of networking will look like.

 

On the Horizon

 

Above all else, one thing is for certain: We networking professionals should expect tomorrow’s technology to create more complex networks resulting in even more complex problems to solve. With that in mind, here are the top networking trends that are likely to shape the networks of the future:

 

Networks growing in all directions

Fitbits, tablets, phablets, and applications galore. The explosion of IoT, BYOD, BYOA, and BYO-everything else is upon us. With this trend still in its infancy, the future of connected devices and applications will be not only about the quantity of connected devices, but also the quality of their connections and the network bandwidth they consume.

 

But it goes beyond the (seeming) “toys” that users bring into the environment. More and more, commodity devices such as HVAC infrastructure, environmental systems such as lighting, and security devices all use bandwidth (cellular or Wi-Fi) to communicate outbound and receive updates and instructions inbound. Companies are using (or planning the use of) IoT devices to track product, employees, and equipment.

 

The explosion of devices which consume or produce data WILL, not might, create a potentially disruptive explosion in bandwidth consumption, security concerns, and monitoring and management requirements.

 

IPv6 Now… or sooner

ARIN reports that they have now depleted their IPv4 Free Pool. Meanwhile, IPv6 is enabled by default and is therefore creating challenges for IT professionals, even if they put off their own IPv6 decisions. (Check out this article on VPNs’ insecurity and another on how to mitigate IPv6 attack attempts.) The upshot of all this is that IPv6 is a reality today. You need to learn about it and be ready for the inevitable moment when switching over is no longer an option, but a requirement.

 

SDN, NFV, and IPv6 will become the mainstream

Software defined networking (SDN) and network function virtualization (NFV) are still in their infancy and should be expected to become mainstream in the next five to seven years. With SDN and virtualization creating new opportunities for hybrid infrastructure, a serious look at adopting these technologies is becoming more and more important.

 

So long WAN Optimization, Hello ISPs

There are a number of reasons WAN optimization technology is being, and will continue to be, kicked to the curb. With bandwidth increases outpacing the ability of CPUs and custom hardware to perform deep inspection and optimization, and with ISPs helping to circumvent the cost and complexities associated with WAN accelerators, WAN optimization will only see the light of tomorrow in unique use cases where the rewards outweigh the risks. As most of us will admit, WAN accelerators are expensive and complicated, making ISPs more and more attractive. Their future inside our networks is certainly bright.

 

Farewell L4 Firewalling

With the mass of applications and services moving toward web-based deployment, using Layer 4 (L4) firewalls to block these services entirely will not be tolerated. A firewall incapable of performing deep packet analysis and understanding the nature of traffic at Layer 7 (L7), the application layer, will not satisfy the level of granularity and flexibility that most network administrators should offer their users. On this front, change is clearly inevitable for us network professionals, whether it means added network complexity and adapting to new infrastructures or simply letting withering technologies go.

 

Preparing to Manage the Networks of Tomorrow 

 

So, what can we do to prepare to monitor and manage the networks of tomorrow? Consider the following:

 

Understand the “who, what, why and where” of IoT, BYOD and BYOA

Connected devices cannot be ignored. According to 451 Research, mobile Internet of Things (IoT) and Machine-to-Machine (M2M) connections will increase to 908 million in just five years, this compared to 252 million just last year. This staggering statistic should prompt you to start creating a plan of action on how you will manage nearly four times the number of devices infiltrating your networks today.

 

Your strategy can either aim to manage these devices within the network or set an organizational policy to regulate traffic altogether. As nonprofit IT trade association CompTIA noted in a recent survey, many companies are trying to implement partial or even zero-BYOD policies to regulate security and bandwidth issues. Even though policies may seem like an easy fix, curbing all of tomorrow’s BYOD/BYOA is nearly impossible. As such, you will have to understand your network device traffic in granular metrics in order to optimize and secure it. Even more, you will need to understand network segments that aren’t in your direct control, like the tablets, phablets, and Fitbits, to properly isolate issues.

 

Know the ins and outs of the new mainstream

As stated earlier, SDN, NFV and IPv6 will become the new mainstream. We can start preparing for these technologies’ future takeovers by taking a hybrid approach to our infrastructures today. This will put us ahead of the game with an understanding of how these technologies work, the new complexities they create and how they will ultimately affect configuration management and troubleshooting ahead of mainstream deployment.

 

Start Comparison Shopping Now

Go through the exercise of evaluating ISPs, virtualized network options, and other on-the-horizon technologies, even if you don’t intend to switch right now, because it will help you nail down your particular requirements. Sometimes knowing that a vendor has or works with technology you don’t need now, such as IPv6, but might need later can and should influence your decision.

 

Brick In, Brick Out

Taking on new technologies can feel overwhelming to those of us with “boots on the ground,” because often the new tech becomes one more mouth to feed, so to speak. As much as possible, look for ways that the new additions will not just enhance, but replace the old guard. Maybe your new real-time deep packet inspection won’t completely replace L4 firewalls, but if it can reduce them significantly, while at the same time increasing insight and the ability to respond intelligently to issues, then the net result should be a better day for you. If you don’t do this, then more often than not, new technology will indeed simply seem to increase workload and do little else. This is also a great measuring stick for identifying new technologies whose time may not yet have truly come, at least not for your organization.

 

At a more basic level, if you have to replace three broken devices and you realize that the newer equipment is far more manageable or has more useful features, consider replacing the entire fleet of old technology even if it hasn’t fallen apart yet. The benefits of consistency often far outweigh the initial pain of sticker shock.

 

To conclude this series, my opening statement from part one merits repeating: learn from the past, live in the present and prepare for the future. The evolution of networking waits for no one. Don’t be left behind.

Learn from the past, live in the present and prepare for the future.

While this may sound like it belongs on a high school guidance counselor’s wall, these are words to live by, especially in IT. They apply perhaps to no infrastructure element better than the network. After all, the network has long been a foundational building block of IT; it’s even more important today than it was in the days of SAGE and ARPANET. Its importance will only continue to grow while the network itself simultaneously becomes more complex.

For those of us charged with maintaining the network, it’s valuable to take a step back and examine the evolution of the network. Doing so helps us take an inventory of lessons learned—or the lessons we should have learned; determine what today’s essentials of monitoring and managing networks are; and finally, turn an eye to the future to begin preparing now for what’s on the horizon.

Learn from the Past

Think back to the time before the luxuries of Wi-Fi and the proliferation of virtualization, and before today’s wireless and cloud computing.

The network used to be defined by a mostly wired, physical entity controlled by routers and switches. Business connections were based on T1 and ISDN, and Internet connectivity was always backhauled through the data center. Each network device was a piece of company-owned hardware, and applications operated on well-defined ports and protocols. VoIP was used infrequently, and anywhere connectivity—if even a thing—was provided by the low-quality bandwidth of cell-based Internet access.

With this yesteryear in mind, consider the following lessons we all (should) have learned that still apply today:

It Has to Work

Where better to start than with a throwback to the IETF’s RFC 1925, “The Twelve Networking Truths”? It’s just as true today as it was in 1996: if your network doesn’t actually work, then all the fancy hardware is for naught. Anything that impacts the ability of your network to work should be suspect.

The Shortest Distance Between Two Points is Still a Straight Line

Wired or wireless, MPLS, EIGRP or OSPF, your job as a network engineer is still fundamentally to create the conditions where the distance between the provider of information, usually a server, and the consumer of that information, usually a PC, is as near to a straight line as possible. When you forget that and instead get caught up in quality-of-service maps, automated functions, and fault tolerance, you’ve lost your way.

An Unconfigured Switch is Better than the Wizard

It was a long-standing truth that running the configuration wizard on a switch was the fastest way to break it, whereas just unboxing and plugging it in would work fine. Wizards are a fantastic convenience and come in all forms, but if you don’t know what the wizard is making convenient, you are heading for trouble.

What is Not Explicitly Permitted is Forbidden

No, this policy is not fun and it won’t make you popular. And it will actually create work for you on an ongoing basis. But there is honestly no other way to run your network. If espousing this policy will get you fired, then the truth is you’re going to get fired one way or the other. You might as well be able to pack your self-respect and professional ethics into the box along with your potted fern and stapler when the other shoe drops. Because otherwise that huge security breach is on you.

Live in the Present

Now let’s fast forward and consider the network of present day.

Wireless is becoming ubiquitous—it’s even overtaking wired networks in many instances—and the number of devices wirelessly connecting to the network is exploding (think Internet of Things). It doesn’t end there, though—networks are growing in all directions. Some network devices are even virtualized, resulting in a complex amalgam of the physical, the virtual and the Internet. Business connections are DSL/cable and Ethernet services, and increased use of cloud services is stretching Internet capacity at remote sites, not to mention opening security and policy issues since it’s not all backhauled through the data center. BYOD, BYOA, tablets and smartphones are prevalent and are creating bandwidth capacity and security issues. Application visibility based on port and protocol is largely impossible due to applications tunneling via HTTP/HTTPS. VoIP is common, also imposing higher demands on network bandwidth, and LTE provides high-quality anywhere connectivity.

Are you nostalgic for the days of networking yore yet? The complexity of today’s networking environment underscores that while lessons of the past are still important, a new set of network monitoring and management essentials is necessary to meet the challenges of today’s network administration head on. These new essentials include:

Network Mapping

While perhaps a bit back to basics, and also suitable as a lesson we all should have learned by now, when you consider the complexity of today’s networks and network traffic, network mapping and the subsequent understanding of management and monitoring needs have never been more essential. Moving ahead without a plan, without knowing the reality on the ground, is a sure way to make the wrong network monitoring choices based on assumptions and guesswork.

Wireless Management

The growth of wireless networks presents new problems, such as ensuring adequate signal strength and keeping the proliferation of devices and their physical mobility (potentially hundreds of thousands of network-connected devices, few of which are stationary and many of which may not be owned by the company, thanks to BYOD) from getting out of hand. What’s needed are tools such as wireless heat maps, user device tracking, reporting on over-subscribed access points, and tracking and managing device IP addresses.

Application Firewalls

When it comes to surviving the Internet of Things, you first must understand that all of the “things” connect to the cloud. Because they’re not coordinating with a controller on the LAN, each device incurs a full conversation load, burdening the WAN and every element in a network. And worse, many of these devices prefer IPv6, meaning you’ll have more pressure to dual-stack all of those components. Application firewalls can untangle device conversations, get IP address management under control and help prepare for IPv6. They can also classify and segment device traffic; implement effective quality of service to ensure that critical business traffic has headroom; and of course, monitor flow.

Capacity Planning

Nobody plans for not growing; it’s just that sometimes infrastructure doesn’t read the plan we’ve so carefully laid out. You need to integrate capacity forecasting tools, configuration management, and web-based reporting to be able to predict scale and growth. There’s the oft-quoted statistic that 70 percent of network outages come from unexpected network configuration changes. Admins have to avoid the Jurassic Park effect: unexpected outages that in hindsight were clearly predictable are the bane of any IT manager’s existence. “How did we not know and respond to this?” is a question nobody wants to have to answer.
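
Capacity forecasting doesn’t have to start sophisticated. As a toy example using invented data, the sketch below fits a straight line to monthly utilization samples and estimates when a link crosses a planning threshold:

```python
# Fit a linear trend to utilization samples and project time-to-threshold.
# Sample data is fabricated; real forecasting tools do far more than this.
from statistics import linear_regression  # requires Python 3.10+

months = [0, 1, 2, 3, 4, 5]
utilization_pct = [42, 45, 49, 52, 56, 60]  # monthly average utilization

slope, intercept = linear_regression(months, utilization_pct)
threshold = 80.0
months_until = (threshold - intercept) / slope

print(f"growing ~{slope:.1f} points/month; ~{months_until:.0f} months to {threshold}%")
```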

Application Performance Insight

Many network engineers have complained that the network would be stable if it weren’t for the end users. While it’s an amusing thought, it ignores the universal truth of IT: everything we do is because of and for end users. The whole point of having a network is to run the business applications end users need to do their jobs. Face it, applications are king. Technologies such as deep packet inspection, or packet-level analysis, can help you ensure the network is not the source of application performance problems.

Prepare for the Future

Now that we’ve covered the evolution of the network from past to present—and identified lessons we can learn from the network of yesterday and what the new essentials of monitoring and managing today’s network are—we can prepare for the future. So, stay tuned for part two in this series to explore what the future holds for the evolution of the network.

(COMING SOON) Read “Blueprint: Evolution of the Network - Part Two.”

kong.yang

IT's a MAAD World

Posted by kong.yang Employee Nov 3, 2015

This post originally appeared on SolarWinds Content HUB.

 

All around me are familiar faces

Worn out places, worn out faces

Bright and early for the daily races

Going nowhere, going nowhere...

 

And I find it kind of funny

I find it kind of sad

The dreams in which I'm dying are the best I've ever had

I find it hard to tell you,

I find it hard to take

When people run in circles it's a very, very

Mad world, mad world


AWS re:Invent 2015 reminds me of the lyrics from Roland Orzabal’s “Mad World.” The first verse represents traditional Enterprise IT as it struggles to transform and enable continuous service delivery and continuous service integration. The second verse encompasses the conversation that IT operations is having with itself about removing tech inertia and adopting the DevOps culture, as well as the conversation it is having with developers as IT professionals try to learn and live agile and lean.


The disruption from highly available, easy-to-use, and easy-to-scale cloud services is making IT organizations run in circles trying to change themselves while harnessing that change into business value. It’s like IT is becoming a mad world; but it doesn’t have to be, as long as you’re MAAD and not literally mad. Whether you are an IT professional, a DevOps engineer, or an application developer, you can never be MAAD enough in this mad world, this age of instant applications. And by MAAD, I mean monitoring as a discipline.


So why leverage monitoring as a discipline in the age of instant apps? SolarWinds Developer Evangelist Dave Josephsen said it best in his IT Briefcase article: “Teams with the know-how to embrace metrics-driven development and scale their monitoring into their codebase, will spend less time mired…and more time building and running world-class systems that scale.” But not so fast, you say, because you’re in IT ops and not a developer. Okay, no problem. As my friend and fellow SolarWinds Head Geek, Thomas LaRock, so eloquently puts it, you need to learn to pivot. And when you do, embrace the discipline that you’ve already matured your career with: monitoring.


Monitoring is the ideal discipline to bridge the gap from your premises to your clouds at your scale. I think of monitoring as a set of eight skills:

  1. Discovery – show me what’s going on.
  2. Alerting – tell me when something broke or is going bad.
  3. Remediation – fix the problem.
  4. Troubleshooting – find the root-cause.
  5. Security – govern and control the data, app, and stack planes.
  6. Optimization – run more efficiently and effectively.
  7. Automation – scale it.
  8. Reporting – show and tell to the management teams/business units.

 

The first four skills (the DART framework) are covered in detail in a SolarWinds eBook focused on virtualization. The last four will be covered in another SolarWinds eBook later this year or early next year. These skills apply to any IT professional, especially one looking to enable hybrid IT service models. The figure below illustrates the DART framework:

[Figure: the DART framework (Discovery, Alerting, Remediation, Troubleshooting)]

 

Traditional IT organizations are embracing transformation, as evidenced by AWS’s continued simplification of cloud services for Enterprises to consume. Many organizations still face internal resistance to change and to the rate of change associated with continuous delivery and continuous integration. At the same time, the disdain for IT professionals from the DevOps purists at THE cloud conference is still palpable. Some of it may be deserved for the years of IT roadblocks in the guise of rigor and discipline. Whatever the case, continuous service delivery and continuous service integration are the new realities for Enterprise IT. Dev is the new black.


So IT professionals, take ownership of your premises, your clouds, and your scale with monitoring as a discipline. It’s definitely not all quiet on the cloudy fronts. The storms of continuous change are brewing, and IT professionals need to stay ahead of the game. If you think you’re in the calm, the storm is already upon your organization, and disruption is about to be forced upon you.


I’ll end with words from a highly distinguished monitoring engineer who’s always on the leading edge of tech, Adrian Cockcroft. Adrian says that the CIO (and in turn their IT professionals) has three key goals:

  1. Align IT with the business
  2. Develop products faster
  3. Try not to get breached


That all three goals can be achieved with monitoring as a discipline is just utter MAAD-ness!

Is it possible for monitoring of your servers to be really effective? Or has it been configured in a way that produces nothing but white noise you have come to ignore?  Server monitoring is imperative to ensuring that your organization functions optimally and minimizes the number of unanticipated outages.

 

Monitoring Project Gone Wrong

Many years ago, when I started with a new company, I was handed a corporate “flip phone.”  This phone was also my pager. When I was on-call for the first time, I expected to be alerted only when there was an issue.  WRONG!  I was alerted for every little thing, day and night.  When I wasn’t the primary point person on-call, I quickly learned to ignore my device, and when I was on-call, I was guaranteed to get some form of illness before the end of the week.  I was worn down from checking every little message on my pager all night long. Being the new member of the team, I observed at first, but soon enough was enough.  Something had to change, so we met as a team to figure out what we could do.  We were all ready for some real and useful alerting.

 

Corrective Measures

When monitoring has gone wrong and the server monitoring needs to change, what can be done?  Based upon that incident, it became very important to pull together a small team to spearhead the initiative and get the job done right.

 

Here is a set of recommendations on how monitoring configured wrong could be turned into monitoring done right.


  • Determine which areas of server monitoring are most important to infrastructure success, and then remove the remaining unnecessary monitoring.  For example, key areas to monitor would be free disk space, CPU, memory, network traffic, and core server services.
  • Evaluate your thresholds in those areas defined as primary, and modify them according to your environment.  Oftentimes the defaults set up in monitoring tools can be used as guidelines, but they usually need modification for your infrastructure.  Even the fact that a server is physical or virtual can change the thresholds required for monitoring.
  • Once evaluation is complete, adjust the thresholds for these settings according to the needs of your organization.
  • Stop and evaluate what is left after these settings have been adjusted.
  • Repeat the process until alerting is clean and only occurs when something is deemed necessary.

 

As the process is repeated, the exceptions will stand out more and can be implemented more easily.  Exceptions can come in the form of resources spiking during overnight backups, applications inherently requiring exceptions due to the nature of their memory usage (e.g., SQL Server or Microsoft Exchange), or something as simple as monitoring different server services depending on the installed application.  Continual refinement and repetition of the process ensure that your 3 a.m. infrastructure pages are real and require attention.
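
As one illustration, here is a minimal sketch of how a backup-window exception might be encoded so the alert check itself knows when a spike is expected. The threshold, window, and sample readings are invented:

```python
# Suppress a CPU alert during a known backup window so that 3 a.m.
# pages stay meaningful. Threshold, window, and samples are invented.
from datetime import time

CPU_CRITICAL_PCT = 95
BACKUP_WINDOW = (time(1, 0), time(4, 0))  # nightly backups, 01:00-04:00

def should_page(cpu_pct: float, now: time) -> bool:
    """Page only when the threshold is breached outside the backup window."""
    in_window = BACKUP_WINDOW[0] <= now < BACKUP_WINDOW[1]
    return cpu_pct >= CPU_CRITICAL_PCT and not in_window

print(should_page(98, time(2, 30)))  # False: inside backup window, expected
print(should_page(98, time(14, 0)))  # True: a real problem, page someone
```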


Concluding Thoughts

Server monitoring isn’t one-size-fits-all, and these projects are often large and time consuming.  Environment stability is critical to business success.  Poorly implemented server monitoring impacts the reputation of IT, so spending the appropriate amount of time ensuring the stability of your infrastructure is priceless.

scuff

Your new Microsoft Office

Posted by scuff Nov 2, 2015

Continuing my series about change, let’s look at the newest Office suite – Office 2016.


It’s the second major version of Office to use the ‘click-to-run’ streaming deployment methodology. More importantly, it’s the first major version of Office to really ‘catch up’ with what Microsoft has been doing in the Cloud. That’s a very important point. With Microsoft totally controlling the infrastructure within Office 365, and therefore the component versions, it can release new features in the Cloud first without worrying about backwards compatibility. Office 2016 (along with Exchange 2016) is the first real ‘retrofit’ to bring Cloud capabilities to the desktop and on-premises server.


Not that they’ve totally succeeded, as many new Office 2016 features are more ‘Cloud integration’ than they are ‘Cloud but on your PC’. For example, in Office 2013 we saw OneDrive and SharePoint Online included as places to open from or to save to. Office 2016 now includes a new, easy-to-find ‘Share’ button (in Word, PowerPoint & Excel), which sets up sharing permissions and sends an email invitation in one step from within your document. But the catch is that the document HAS to already be located in the Cloud. If you’ve just shivered with fear at the thought of your users sharing any business document outside of your organization, there are administrative controls that cover external sharing. In addition, OneDrive and SharePoint Online can now take advantage of Data Loss Prevention, which makes certain information within documents ‘unshareable’ (eg credit card numbers) based on your corporate policy.

 

Word and PowerPoint are the first desktop programs to allow real-time co-authoring. If you’ve ever been frustrated playing ‘attachment tennis’ (aka who has the correct version), this feature is a magical sanity saver. But … wait for it … again, only available if the document is in a Cloud service (OneDrive or SharePoint Online).

 

Outlook gets less Cluttered, with a new feature that’s only available with an Office 365 subscription. Clutter is like the Junk folder, but for things you might actually want to read occasionally; it learns which emails you tend to only glance at and delete. It doesn’t sound like much, but after using it for a few months it’s improved my productivity and my Inbox triaging.

 

So what’s in it for traditional, non-Cloud environments?

  • Updated Office themes and new chart types! Yes, I know, you can stop celebrating now.
  • One-click forecasting for Excel lovers projects data forward based on historical information.
  • ‘Tell me more’ sits quietly along the menu bar, but it’s actually a more useful way of getting help on Office tasks and features and is worth a look.
  • Smart Lookup brings the power of Bing search (stop laughing now) into Office, delivering Wikipedia entries and Internet search results to the Insights pane as well as definitions and synonyms from the Oxford Dictionary.  Just right-click a word or phrase and select Smart Lookup.
  • By far my favourite new feature is an update to Outlook’s file attachment dialog.  Clicking to attach a file now automatically lists your Recent Items without having to navigate File Explorer. This includes things like screenshots you have recently snipped & saved. It also asks if you want to send Cloud files as a link to the Cloud (and sets permissions) or as an attached copy of the file.

 

 

So, without the Cloud features, there’s not really anything in there that’s going to make you rush out and upgrade, especially when even Office 2010 is in extended support until October 2020.  

 

If you are looking to deploy or upgrade an Office 365 ProPlus flavour, the Office Deployment Tool (ODT) will become your new best friend (albeit a slightly annoying one who sulks if you don’t have everything perfect). The tool brings down an installation package once, and an easily modified configuration script sets the scene for running it on your workstations. This can include or exclude apps, and you can have a different configuration file for the HR department or your project managers (depending on what file you use when you launch their installation). The most frustrating thing about this tool is a generic ‘There’s something wrong – check your permissions and your internet access’ error that can mean:

  • You don’t have local admin rights
  • User Account Control is blocking the installations
  • The security permissions on source files are wrong
  • The security permissions on source share are wrong
  • It’d prefer a mapped network drive instead of a UNC path
  • Insert any other variable reason here that I haven’t come across yet, which will likely use the same error code.

 

 

Office 365 ProPlus will support a branch update methodology similar to Windows 10, but it omits the Long Term Servicing Branch. Using the ODT, you can segregate your installations to apply updates for the Current Branch (monthly) or the Current Branch for Business (every four months), or stay on the cutting edge with First Release for Current Branch for Business (also every four months). That lets you control who gets the updates first, allowing for some real-world use and testing before you let the updates loose on the rest of your organisation.
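
For reference, the ODT consumes an XML configuration file along the lines of the sketch below. Treat it as illustrative only: the source path and product ID are examples, and the attribute names and accepted Branch values should be verified against Microsoft’s current ODT documentation before use.

```xml
<!-- Illustrative ODT configuration circa Office 2016. The path and IDs
     are examples; verify attributes against current documentation. -->
<Configuration>
  <Add SourcePath="\\fileserver\office2016" OfficeClientEdition="32"
       Branch="Business">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us" />
      <ExcludeApp ID="Access" />
    </Product>
  </Add>
  <Updates Enabled="TRUE" Branch="Business" />
  <Display Level="None" AcceptEULA="TRUE" />
</Configuration>
```

You then run setup.exe /download configuration.xml once to pull the installation package down, and setup.exe /configure configuration.xml on each workstation.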

 

So, how are things looking in your Office?

Are you locked in to Office 2007 because a line-of-business application depends on it?

Are you planning your Office 2016 rollout or have you tested it at home?

 

-SCuff

Another helpful tip from someone who does this for a living

By Corey Adler (ironman84), professional software developer

 

Greetings again, thwack®. I hope my last post didn’t scare you away from learning how to code properly. What do you mean you haven’t seen it yet? Click here now!

 

When I began talking to my good buddy and Head Geek™ Leon Adato about more tips for you, the code novice, he stopped me right after I mentioned my first idea. “You know Corey, that’s a great topic,” he said. “You should dedicate an entire post to discussing it, not just a small paragraph.” Sigh. Fine. Here goes nothing.

 

Integrated development environments (IDEs) are your best friend

An integrated development environment (IDE) is an application that facilitates coding in any number of programming languages. IDEs tend to come with a variety of features, including a source code editor, intelligent code completion, compilers/interpreters (depending on the language), and the ability to debug the code as it runs.

 

In college, I knew a computer science professor who would not let her intro students use one of the better-featured IDEs, such as Eclipse or NetBeans®, for her class. Instead, they had to use one that was nothing more than a text editor that could interpret Java™ programs. She did this because she wanted her students to learn the language syntax and to create class files without all the bells and whistles, which she felt would keep them from developing finger memory. My reaction to her approach now is the same as it was back then, only more amplified: I think that’s an incredibly ridiculous way to teach programming – Java or otherwise.

 

Why? Because you aren’t going to learn the language any faster by breaking your teeth on it. All you’ll end up doing is frustrating yourself to no end. Professional software developers use full-featured IDEs all the freaking time. I’m a .NET developer who keeps a copy of Visual Studio® 2013 Ultimate on my work laptop, along with a bevy of extensions on top of it, including the popular ReSharper extension, which adds even more keyboard shortcuts and code completion features. It even gives me advice about good coding practices that I may have missed while coding.

 

It’s true: I already know the language, and I’m not a beginner who should learn it the old-fashioned way. While that may be true, I’ve learned more about coding and languages from the helpers in my IDE than I ever did in class. For example, let’s talk about that whole intelligent code completion business. When I instantiate a variable (more on that in a different post) of a certain type, Visual Studio will tell me every function and property that I can access on that variable. If it’s a baked-in type, I even get documentation about what each of them does! Why the heck would someone not want that? As previously suggested, don’t try to reinvent the wheel, which includes not avoiding the use of a full-featured IDE.

 

You know what else is cool about IDEs? The fact that they exist for pretty much any programming language in the world. I did a simple Google® search for a list, and found the following Wikipedia article that includes a huge list:

https://en.wikipedia.org/wiki/Comparison_of_integrated_development_environments.

 

Not all of them are free, but most of the paid ones will still have a free version with some features. Working on .NET? (Hey! Me too! REPRESENT!) Download Visual Studio Community. How about with Java? Use the aforementioned Eclipse. It’s fantastic and open-sourced! Or maybe you’re one of those unfortunate souls who use Perl® and PHP (heaven help us all. Yes, Leon, I’m looking at YOU). In that case, may I highly recommend using NetBeans, which is free under GPL licensing! With so many options and features to make your coding journey easier, why would you ever choose to not use one?

 

Maybe Leon was right. This topic did deserve its own post.  Until next time, I wish you good coding!

This post is the result of an idea from and is co-authored by gerardodada, VP of Product Marketing.


What is an IT ninja? IT ninjas eliminate issues and incidents in data centers with their unique set of skills, specifically remediation and troubleshooting. They are out of sight, out of mind, and only called on when the mission calls for immediate resolution. IT ninjas are perfect for dealing with the rogues and shadows found in the IT ecosystem, armed with a myriad of tools like the ninja star. Ninja stars enable agility, travel lean, and provide security. However, they require skill and experience to use effectively and efficiently.


Deconstructing the IT ninja star

The ninja star can be thought of as a triangle that intersects with an inverted triangle. This also happens to be a perfect representation of what IT organizations must deal with in order to succeed in their IT transformation.


Think of the upright triangle, and the area at each cross-section of it, as time spent on tasks.  The base is urgent tasks, e.g., a system is broken or an application is down. As you ascend the triangle, there are more gains to be had in overall value to the organization, but there is also less time allotted to realize them.


Next, think of the inverted triangle as representing impact to the overall business. Tasks such as incorporating best-practice policies provide the best value for differentiated lifting of business objectives, while fixing broken things just keeps the needle where it is.


The intersection of the two triangles in the ninja star represents the best use of time and value add creation. It’s a balanced approach to keeping the lights on while moving forward in enabling practices that can create disruptive innovation for the business.



Figure 1: IT Ninja Star

 

Reactive to transformational

For the benefit of the business, Enterprise IT organizations are embracing transformation to extract disruptive innovation and value-added differentiation from their applications and intellectual property. Yet chatter from cloud and virtualization conferences like AWS re:Invent and VMworld reminds us that most IT departments are still mired in a reactive culture of keeping the lights on. In turn, they are struggling to fully embrace the DevOps culture.


The struggle starts with the process and inertia that made IT rigor and discipline the dependable stalwarts in times of crisis. Now that same process is called outdated and a time sink. The goals of IT are shifting to creating value from the important tasks, i.e., implementing best-practice policies, versus fixing stuff that breaks, i.e., keeping the lights on.


IT professionals tend to spend most of their time on these urgent “keep-the-lights-on” things, such as recovering from a disaster incident, maintenance, or dealing with “my app is slow” tickets. Time that should be spent on disrupting is instead spent on just keeping afloat.


How to succeed in transformational IT

Transformational IT organizations are embracing the DevOps culture, one that strives for continuous delivery and continuous integration. The ones that have been successful are using a tried-and-true, tri-modal method shared by Simon Wardley: Pioneers, Settlers, and Town Planners.  They recognize the importance and value of each persona and glean best-practice policy and rigor from one another. By embracing these best practices and policies, along with best-in-class monitoring tools that cross each stage, they can gain scale and be agile in their application implementations. And ultimately, do what they do best as IT ninjas.


Closing remarks

It’s high time that IT professionals become IT ninjas and unleash their ninja stars to transform their organizations. That transformation will turn reactive teams burdened with high internal tech inertia into disruptively innovative teams, with frictionless delivery and integration from the application all the way to business utility.


What say you?

As we approach the end of National Cyber Security Awareness Month, it’s time to focus on ways to get more from your current staff and resources. In light of our country’s current security skills shortage (more than 50 percent of 600+ companies surveyed indicated that it takes roughly three to six months to fill cyber security positions, and even then, available staff may not have the necessary skills to detect and respond to complex incidents[1]), organizations must explore ways to optimize IT and security team functions. Too often, a lack of coordination between teams leads to inefficiency and wasted effort.

 

If you don’t have an efficient, streamlined patch management program in place, for example, work done in vulnerability assessment (VA) could result in a pile of unread spreadsheets. VA programs are expensive to set up and manage, and usually involve a monthly cost. This means that any month the data isn’t used will wind up being a waste of time, money, and resources. If your IT team is not ready to manage VA, consider having your security team work with them to set up good patch management tools and practices.
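Even a small script can keep that VA data from rotting in a spreadsheet. Below is a minimal sketch in Python; the file name (findings.csv) and its host/cve/severity columns are hypothetical stand-ins for whatever your scanner actually exports. It rolls findings up into a per-host patch priority list that the security and IT teams can act on together.

```python
import csv
from collections import defaultdict

# Hypothetical VA export: one row per finding, with "host", "cve",
# and "severity" (CVSS 0-10) columns. Adjust names to your scanner.
findings_by_host = defaultdict(list)
with open("findings.csv", newline="") as f:
    for row in csv.DictReader(f):
        findings_by_host[row["host"]].append((row["cve"], float(row["severity"])))

# Rank hosts by their worst finding so the patch team sees the riskiest first.
ranked = sorted(
    findings_by_host.items(),
    key=lambda item: max(sev for _, sev in item[1]),
    reverse=True,
)

for host, findings in ranked:
    worst = max(sev for _, sev in findings)
    print(f"{host}: {len(findings)} findings, worst severity {worst:.1f}")
```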

 

Sometimes different functional teams want to access data from the same sources. In other cases, data from devices being managed by different teams may not reach its desired destination. Both instances call for monitoring. Take, for example, switches and routers vs. ingress/egress devices on the network. Traditionally, ingress/egress (firewalls) are configured and managed by the security team, and internal switches and routers are managed by the networking team. However, each team would benefit from sharing information. Perhaps there should be internal firewalling between organizational teams: finance and human resources, sales and marketing, engineering and product management. If these internal firewalls are being implemented with access control lists on internal systems, does the networking team configure and manage these devices, or does security? 

 

Another area for sharing best practices is change management. In many organizations, change management is either overlooked or not practiced consistently across teams. Look inside your organization and see which team has the most maturity in process, tools, and efficiency for change management. This might be the applications team, the IT team, the networking team, the security team, or maybe even DevOps. Establishing best-practice leads across functional groups encourages communication, creates a culture of cooperation rather than antagonism, and helps mitigate staff shortages.

 

A 2012 Chicago School survey of job satisfaction[2] indicates that an important component of satisfaction comes from being recognized for using one's inherent skills and abilities. In cross-functional teams, employees are encouraged to share their skills and abilities with a broader audience, which leads to improved processes and greater job satisfaction.

 

As Henry Ford stated, “Coming together is a beginning. Keeping together is progress. Working together is success.”



[1] http://thehill.com/blogs/congress-blog/technology/239113-cybersecurity-talent-worse-than-a-skills-shortage-its-a

[2] http://psychology.thechicagoschool.edu/resource/industrial-organizational/determinants-of-job-satisfaction-in-the-workplace


Windows 10 Brings Us The Future

Posted by scuff Oct 27, 2015

Last week, the Internet was awash with celebrations of Back to the Future day – that date in the second film that Doc Brown & Marty McFly travelled to from 1985. Sadly, we still don’t have hoverboards. But the future is now the past.

 

That sentence rings true in the IT industry, where an accelerated pace of product development happens while you’re keeping the network, servers, and backups running. We’ve talked about how to keep up with all the changes in “Technology has changed – have you?” (https://thwack.solarwinds.com/community/solarwinds-community/geek-speak_tht/blog/2015/06/05/technology-has-changed-have-you).

 

Now let’s look at some of the Microsoft technology that’s changed in the last 12 months and what this means for Sys Admins ... starting with Windows 10.

 

At the last PR opportunity, Microsoft announced that Windows 10 was now on 110 million devices, averaging 1.6 million installations per day since its release. This includes 8 million business PCs, so 102 million of those upgraded PCs were consumer devices. That leaves significant room for growth in the business PCs number. And you know exactly why. Most Enterprises are not yet ready to roll out Windows 10. Heck, some of them are still getting rid of Windows Server 2003.

 

If your organization is running Windows 7, it has ‘extended support’ until Jan 14, 2020 (http://windows.microsoft.com/en-au/windows/lifecycle). This means you’ll still get security patches, but you won’t get product updates, and Windows 10 shipped a ton of those, including:

  • Cortana on the desktop: searching apps you have installed, apps you could install, documents you can access and results from the web & other services that integrate with Cortana.
  • Action center and notifications, similar to your phone. ‘Quiet hours’ is my favorite, suppressing all app notifications until I turn them on again.
  • Edge browser: Banishes all the browser plugins, but Enterprise admins can configure specific sites to still open with IE for backwards compatibility.
  • Multiple desktops: Lets you group applications on a virtual desktop, so you can switch between projects, or share one desktop during a conference call while the other has the apps you don’t want to share.
  • Windows Hello & Windows Passport: Bringing biometric sign-in support (face, iris, or fingerprint) to newer hardware devices, or a 4-digit PIN for older hardware. Is this the end of the forgotten password?
  • … and you get to keep your Start menu, which now also supports tiles!

 

Enterprise Edition sys admins can look forward to:

 

The Windows Store for Business is available to users with an Azure AD account. It gives staff access to corporate apps, or approved third-party apps licensed at a business level, from within the Windows Store, without entering a personal credit card.

 

For sys admins, Windows 10 has changed the upgrade process, which is big news for those of us who prefer a ‘wipe and reload’ strategy. The new ‘In-Place Upgrade’ actually automates everything we’d do in a wipe and reload: it captures the data and settings, moves the existing OS aside, installs the new OS image, and restores the data and settings. This can be managed with System Center Configuration Manager or the Microsoft Deployment Toolkit. If you don’t use SCCM within your organization, I highly recommend you take a look at the MDT:

https://technet.microsoft.com/library/mt280162.aspx

http://www.scconfigmgr.com/2015/10/24/create-a-windows-10-enterprise-reference-image-with-mdt-2013-update-1/

http://www.systemcenterdudes.com/managing-windows-10-with-sccm-2012/

 

Windows 10 also changes how we receive and roll out operating system updates. Known as ‘Windows as a Service’, it’s not a subscription model but a delivery model governing when updates roll out and who gets them. Microsoft slices the market into Consumer (Current Branch), Business (Current Branch for Business), and Specialized systems (Long Term Servicing Branch). Current Branch for Business covers environments controlled by WSUS, MDM, or Configuration Manager, and lets you split your organization’s devices into 4 ‘rings’ over a period of 8 months for a staggered update deployment (https://technet.microsoft.com/en-us/library/mt574263(v=vs.85).aspx).
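To make the ring idea concrete, here's a minimal back-of-the-napkin sketch in Python. The device names, ring count, and pilot date are all assumptions for illustration; in practice the grouping would live in WSUS, MDM, or Configuration Manager rather than a script.

```python
from datetime import date, timedelta

# Assumed inventory and pilot date, purely for illustration.
devices = [f"PC-{n:04d}" for n in range(1, 201)]
pilot_start = date(2015, 11, 1)
RING_COUNT = 4  # four rings staggered across the eight-month window

# Deal devices round-robin into rings; a real grouping would follow
# risk tolerance (IT staff first, executives last), not modulo arithmetic.
rings = {r: [] for r in range(RING_COUNT)}
for idx, device in enumerate(devices):
    rings[idx % RING_COUNT].append(device)

for ring, members in rings.items():
    deploy_on = pilot_start + timedelta(days=60 * ring)  # roughly 2 months apart
    print(f"Ring {ring + 1}: {len(members)} devices, updates begin {deploy_on}")
```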

Check out Michael Beck’s Ignite session: Windows as a Service: What does it mean for your business? https://channel9.msdn.com/Events/Ignite/2015/BRK2322

In conclusion, the significant number of changes in Windows 10 means more learning and planning for sys admins than your last operating system upgrade, but it might be your last big learning curve. Has your organization started rolling out Windows 10 yet? Do you have it in your test lab? What’s your biggest challenge with adopting the new operating system?

 

-Scuff

 

P.S. Infrastructure Technical Evangelist Simon May has put together 15+ killer Windows 10 resources for IT admins: http://simon-may.com/15-killer-windows-10-resources-for-it-admins/

In the nearly three decades I've worked in IT, there is a truth that trumps all others: More is better; smaller is better; and more-into-smaller is the best.

 

Whether your area of interest is storage, DevOps, virtualization, networking, app dev, or InfoSec, squeezing more into a smaller footprint (bits, lines of code, CPUs, etc.) is always better.

 

The ultimate expression of this is the Hollywood fantasy hacker. This is the computing wizard who steps onto the scene with little more than an average-looking laptop, flips it open, and immediately begins bending the systems in question to their will. This magical notebook is apparently outfitted with infinite disk, thousands of CPUs that can process millions of instructions per second, and connectivity that would put Google® Fiber to shame. If needed, it can spawn dozens of guest instances and mimic routers, load balancers, and firewalls.

 

Of course, that's just a fantasy. But it's one borne of a very real desire in the hearts and minds of IT professionals all over the world. The ability to create a realistic virtual simulation of a real computing environment, whether for testing, modeling, historical reference, or hacki... I mean penetration testing, is a very real need that often goes unmet in businesses today.

 

I bring this up because each day we seem to be getting closer to making this a reality. With tools like VMware® and VirtualBox®, we can now carry an entire server farm in our knapsack (as long as we have enough CPU and RAM). But, until recently, the network still eluded those who wanted to create a realistic simulation of their environment, including servers and network devices.

 

GNS3 has been the go-to option for simulated networks for years, but the primary use was preparing for certification exams. Then, about a year ago, GNS3 introduced the ability to link virtual machines (desktops or servers) to those network devices.

 

Suddenly, it became possible to not only model the network, but the servers behind those networks as well. We could simulate Active Directory® traffic, watch Web requests traverse the network, hit the load balancers and return to the requesting PCs. We could even test out monitoring.

 

Obviously, it's that last item that caught my attention. With this new ability to simulate both networks and servers, you could set up a polling engine, monitor the whole shebang, and test out scenarios like parent-child logic, routing failures, configuration-change detection, and more.
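Before pointing a full NPM install at the lab, you can sanity-check that a simulated router will answer a poller with a few lines of Python and the pysnmp library. This sketch assumes SNMP is already enabled on the virtual router, with the community string 'public' and a lab address of 192.168.122.10; both are assumptions you'd swap for your own topology.

```python
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

# One SNMPv2c GET for sysUpTime against an assumed lab router address.
errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
    SnmpEngine(),
    CommunityData('public', mpModel=1),           # v2c community (assumed)
    UdpTransportTarget(('192.168.122.10', 161)),  # lab IP (assumed)
    ContextData(),
    ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysUpTime', 0)),
))

if errorIndication:
    print(errorIndication)  # e.g., request timed out
elif errorStatus:
    print(f'{errorStatus.prettyPrint()} at index {errorIndex}')
else:
    for varBind in varBinds:
        print(' = '.join(x.prettyPrint() for x in varBind))
```

If the router answers here, discovery from the polling engine should find it too.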

 

GNS3 even put together a quick-start guide to help users set up SolarWinds® NPM. But this guide overlooked some key challenges:

  • Many members of the SolarWinds community are unfamiliar with setting up networks from scratch. They inherit them, monitor them, and even troubleshoot issues. But building one from scratch is often outside their skill set.
  • Many members of the GNS3 community are unfamiliar with monitoring tools in general, and SolarWinds® NPM specifically. So the details of configuring the server, running discovery, and setting up alerts are outside their skill set.

This is why I took the plunge and created a guide that helps both groups.

 

You can find it here: https://thwack.solarwinds.com/docs/DOC-187171

 

It's really two guides in one. For IT pros who are new to networking, the first half explains how to install GNS3 and then set up a simple three-router network. The second half is for network engineers who want to get SolarWinds NPM set up quickly, without any of the false starts that come with setting up a server application when you'd rather spend your day working with the bottom three layers of the OSI model.

 

Once you are set up, you will be well on your way to striding into the room, flipping open your magic laptop, and exclaiming, “My simulation shows we’ll be down to 27.3% network cohesion in just under 30 minutes!”

 

Then you can begin typing frantically, and save the day.
