
Take a look at Google's doodle from June 23, 2014. How many of us have been guilty of this during the past couple of weeks?

[Image: Google's World Cup 2014 doodle]

Chances are, many of your end-users are consuming bursts of network traffic to discreetly stream USA games during the FIFA World Cup (true confession – I am guilty as charged). CNET reported a record-breaking 1.7 million concurrent viewers on the WatchESPN video-streaming service during last week's US v. Germany game; combined with viewers streaming the simultaneous Ghana v. Portugal game, traffic exceeded 6 Tbps. The previous record was 3.5 Tbps for the US v. Canada men's hockey semifinal during the 2014 Winter Olympics. While colleagues and cohorts are coming together to cheer their country's team on, IT professionals and upper management have some justifiable concerns.

 

Typically, business-critical applications and services such as SaaS applications, email, databases, file sharing, storage, remote backup, and other revenue-generating traffic account for around 70% of available bandwidth. Video and other non-essential activity must duke it out for the remainder. This leaves precious little overhead, especially on your expensive WAN links, for hundreds of surprise pop-up HD streams spread by chat clients. Even with great QoS maps in place, instead of saying GOOOOOOAL!, your network admin yells NOOOOOOO!

 

While business-critical applications could be put at risk during World Cup games, President Teddy Goalsevelt and Will Ferrell (http://bleacherreport.com/articles/2110321-will-ferrell-in-recife-for-usmnt-rally-offers-to-bite-every-germany-player) would be devastated to hear that corporations are restricting employees from streaming the World Cup games altogether.

 

To help you manage this risk while still fulfilling your patriotic duty, we figured we would remind you of some best practices for network performance management during popular live-streaming events such as the World Cup:

 

  1. Monitor your current traffic profile
    There are numerous hardware and software tools that let you monitor and profile current network traffic to identify when and how bandwidth is being consumed, as well as by whom and by what applications. This gives you real-time visibility into who and what is consuming your bandwidth, and whether you need to take action.
  2. Establish traffic management or quality of service (QoS) policies
    Various network traffic management tools allow a business to establish QoS policies that ensure non-business applications are policed or throttled. They can also ensure that business-critical traffic, such as VoIP or traffic to your data center or cloud, takes priority over non-essential traffic. If video streaming reaches a certain threshold, you can limit the network traffic it consumes to leave room for the business-critical applications (see the sketch after this list).
  3. You may not have to add more bandwidth just yet
    Purchasing additional bandwidth is not the only way to increase your capacity. In addition to QoS, technologies such as WAN acceleration and optimization can effectively improve your WAN performance.
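
To make the QoS idea in item 2 concrete, here's a minimal sketch of what such a policy might look like on a Cisco IOS WAN edge. The class names, NBAR protocol matches, percentages, and interface are illustrative assumptions, not a recommendation for your network:

! classify business-critical and streaming traffic (protocol names are illustrative)
class-map match-any BUSINESS-CRITICAL
 match protocol citrix
 match protocol sqlnet
class-map match-any VIDEO-STREAMING
 match protocol http
!
! guarantee bandwidth for business traffic, police video down to 2 Mbps
policy-map WAN-EDGE
 class BUSINESS-CRITICAL
  bandwidth percent 70
 class VIDEO-STREAMING
  police 2000000
 class class-default
  fair-queue
!
! apply outbound on the WAN link
interface Serial0/0
 service-policy output WAN-EDGE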

 

So time to 'fess up: How many of you are watching the game at work? Are you worried about bandwidth consumption?

 

P.S. If you want to see just how much the FIFA World Cup is slowing down your network, you can download a free trial of NetFlow Traffic Analyzer for real-time network bandwidth monitoring and traffic analysis.

Network outages due to configuration errors are common. However, the readiness to tackle such incidents and minimize network downtime depends on the administrator.

 

The critical need when an outage occurs is to find and correct configuration issues in a matter of minutes. This is why the integration of SolarWinds Network Configuration Manager (NCM) with SolarWinds Network Performance Monitor (NPM) is so compelling for disaster recovery preparedness.



NPM & NCM Duo Takes You From Problem to Resolution in Just 7 Clicks!



For example, consider the following scenario: NPM alerts you that a critical router interface is pegged at 90% utilization.


  1. From the NPM alert, you jump into NCM, where you instantly spot a config change alert ('Config Change Notification' must be pre-configured in NCM).
  2. The 'Main Node Details' page reveals the exact problem: a config change was made recently.
  3. Use 'Compare Configs' to check for differences between the 'new' and the 'working' config. Changes are highlighted in a block of the config code (the change could be to the interface speed, a QoS policy, or anything else).
  4. Select the device from the 'Configuration Management' page to view configuration and node details.
  5. Click the 'Upload' button to select the right configuration file and revert the changes made earlier.
  6. Click 'Upload' to push the last-known-good config.
  7. Click 'Transfer Status' to view the status of the upload. Problem solved!

 

Having all of your configurations stored, catalogued, and backed up allows you to recover from a hardware failure in minutes instead of hours of grueling work.  With NCM, it actually takes longer to physically rack and wire replacement network gear than it takes to get your device back to its pre-disaster status!


An NCM customer related the following story to us.


When we had a switch (Cisco Catalyst 4507R) fail in one of our larger offices, it took down 240 ports on our network. This included phones, computers, wireless devices, printers, etc. on an entire floor. No wireless, no phones, no printing, no computer connections – nothing for an entire floor! It actually took us longer to remove and replace the physical hardware than it did for us to recover from our last known good configuration. Putting it down to numbers, it's something like this:


    • Time to drive replacement gear to office: 2 hrs
    • Time to remove and re-rack the replacement hardware: 1 hr
    • Time to get the switch back online with the proper configuration: 5 min
    • Time for the local team to plug all the network cables back into the switch: Not My Problem!


Building integrated solutions is hard, but monitoring and recording configuration changes is extremely useful for determining whether a configuration change is the cause of a network outage. So, instead of just collecting device configurations (backups) at a specific time each day, you can configure NCM to alert you every time there is a config change. This means you can immediately determine whether a reported issue coincides with a configuration change, and if it does, push the previous configuration to replace the new one and Bob's your uncle – the network is back!

 


In sum, you can't always prevent network outages from happening, but you can certainly be proactive by following best practices and tackle incidents with the least amount of network downtime. Take the necessary steps for disaster recovery preparedness, and when there is a network incident – you're ready to roll!


What similar NPM/NCM stories do you have?  Please share them with us here!



For this month’s IT Blogger Spotlight, we caught up with Philip Sellers of Techazine.com. You can also find Philip on Twitter, where he goes by @pbsellers.

 

Here we go…

 

SWI: So Philip, how did this whole thing get started?

 

PS: I started blogging in 2008. I was inspired by other bloggers in the VMware community who were sharing a lot of great technical information—things that really helped me to learn and optimize my VMware environment. A co-worker pointed me to a couple blogs he’d been following and I started following them, too, and finding others for my RSS reader. I largely missed the mailing list and forum era of community collaboration, but these bloggers really were helping me in my daily job. I wanted to do the same for the community—sharing problems and solutions that I ran into on the job.

 

SWI: And from there Techazine.com was born? Tell me a little about it.

 

PS: My focus with Techazine is mostly enterprise IT and at that it’s mostly around VMware and Microsoft software solutions and HP hardware solutions. Those are the vendors I use daily and so I try to stick with what I know. I also write about management software around those ecosystems; things like PowerCLI and PowerShell, orchestration tools and monitoring tools. I enjoy writing about vSphere most of all as that’s where I have the deepest knowledge. I really believe in the vSphere platform and how it’s revolutionized our internal datacenter. I also enjoy writing about scripting and management solutions that make an admin’s life easier. And, although it’s not really enterprise, I do write a fair bit about Apple, including Macs and the iPhone and iPad.

 

SWI: So, vSphere is your favorite thing to write about. What tends to be your readers’ favorite posts?

 

PS: My most popular posts have been solutions to real-world issues. I tend to get the most comments and emails about those posts. They can range from optimization how-to’s to specific errors with fixes related to a product. I think these tend to be my most popular posts because they immediately help readers with something in their environment. I also seem to get a lot of traffic around my HP 3PAR storage posts. We’ve been early adopters of some of their technology and so I get to write about that.

 

It's actually hard to predict what will be a popular post. Believe it or not, one of my all-time most popular posts is a review I did of the Redbox Instant streaming service when it launched. I think that one was lucky timing on my part; I just happened to post it when there was a lot of interest.

 

SWI: What takes up your time when you’re not blogging?

 

PS: I’m a full time systems administrator for a telephone cooperative in South Carolina. As a part of their internal IT team, our group is responsible for the infrastructure that the business operations run on—the servers, OS and middleware, networking and storage. I also get the chance to consult with our managed services team and help them design solutions for customers from time to time. I primarily focus on Windows servers and Microsoft applications and I'm the primary VMware administrator for the co-op.

 

My other full time gig is family. My wife and I have two young children, so most of our free time is spent chasing after them and going to their activities.

 

SWI: How did you get into IT in the first place?

 

PS: I guess my story started with a Tandy 1000. I started out at home with the command line and DOS. As a student in high school, I really discovered my love for and aptitude with computers while helping teachers and fixing small problems on the school network—NetWare, DOS and Windows 3.11.

 

At the same time, growing up on a farm, my dad taught me how to troubleshoot. Those troubleshooting skills I learned way back when are probably why I ended up as a systems administrator instead of a programmer or developer.

 

I then began working for a consulting company while taking classes in college and that was really my gateway into the industry. The consulting company gave me exposure to a lot of environments and software. I picked up a lot of skills during those first few years. It just seemed like a natural fit for me.

 

At the end of the day, I like solving problems and this field provides lots of opportunities to do that. Lots.

 

SWI: As an IT pro, what are your favorite tools of the trade?

 

PS: I really love PowerShell and PowerCLI. I’ve been busy over the last couple years rewriting a lot of our scripted processes in PowerShell and I’ve learned a lot. So, I’ve really gotten hooked on Quest's PowerGUI as my development environment for these scripts. I’m also a big fan of vCenter Orchestrator for setting up scheduled tasks to run in our vSphere environment.

 

There's a short list of apps that I can't live without. For example, NoteTab Light is my preferred text editor—excellent search and replace functionality—and PuTTY for SSH.

 

From SolarWinds, I don't deal in CIDR often enough, so the Subnet Calculator is a go-to for me. I was also really impressed with SolarWinds Server and Application Monitor (SAM) when I saw it at a recent trade show.

 

SWI: OK, last question…what’s next for IT?

 

PS: Cloud is probably the biggest shift I see taking place in the IT industry right now and in the future. For many companies, cloud doesn’t fit yet. A lot of companies who’ve tested running in the cloud find that the lack of control and security are big sticking points. But it will mature and for companies that don’t have IT as a core competency, I think it’ll make more sense to move to Software as a Service for many of their functions. It won’t ever fully displace the datacenter—since legacy applications will have to run somewhere—but it will most likely shift a lot of IT employees around as the jobs move from private companies to cloud providers.

 

Cloud also has a lot of internal development shops thinking about architecture and how to get more resiliency and scale to applications. Scale-out used to just be for Facebook- and Google-sized companies, but now medium-sized companies and enterprises alike can benefit from investing in rewriting their applications with modern underpinnings.

End-users rely on technology providers to offer simple solutions that help solve complex problems; something that helps both business groups and their customers. Specifically, Web services expedite application-to-application delivery for building integrated systems of Web application components. Without proper knowledge of how to monitor performance, the application could run into a number of issues. Therefore, it's important to understand the specifics of measuring the performance of Web services.

          

Measuring the Performance of Web Services

When you use various Web applications, you'll want to make sure the Web services connected to the application are thoroughly measured. Some reasons for measuring Web services: to determine whether services have enough storage capacity, are secure and free from vulnerabilities, operate well within SLAs, etc. Web components provide seamless data exchange and transfer. A prime example is when you access your bank account online, check the account balance, make a transaction, or pay a credit card bill.

          

By keeping track of Web services, you will be able to better monitor application performance. In turn, issues can be identified and fixed ahead of time, ensuring that applications remain free of errors and glitches.

Organizations use Web services to improve business standards and processes. However, there’s still a possibility that you may run into issues. Some of these issues include:

  • Web service availability: No matter how well your website or Web application runs, it's still prone to availability and performance issues. When your website is down, its Web services will automatically experience availability issues.
  • HTTP performance: Web pages and websites are built on the HTTP communication protocol, which Web services also rely on. The challenge with HTTP is that it establishes a connection with a server only for as long as it takes to transfer the data, so a lot of time is spent creating and terminating connections to the server. In addition, Web services mostly exchange XML messages, and converting data to and from XML can be time consuming during the communication phase.
  • Reliability: Web services mostly use the HTTP protocol, which doesn't assure data delivery and response. These reliability issues are inherent to HTTP itself; you can choose other protocols to try to avoid them.
  • Authorization: Web services that ignore authentication standards can end up transferring data directly to an unauthorized source.

        

There are several other challenges organizations face, such as scalability, testing of Web services, Web service communication, etc., that can degrade the performance of your websites or applications.

          

Monitoring Web Services

Despite all these challenges, utilizing Web services has its own benefits, from both a business and a technology standpoint. Before realizing those benefits, though, it's crucial to figure out how to avoid the issues that degrade performance. It's important to monitor Web services from the beginning, before they affect the overall performance of the application and the end-user experience. And monitor not just the Web services, but the application and any processes that may slow the service; the server will also have to be closely monitored. Monitoring Web services continuously gives you visibility into application health, so you can start helping end-users and customers manage their SaaS and other internal applications. Here are some of the benefits of monitoring Web services:

  • Monitor internal and SaaS-based apps running across all servers in the environment
  • Determine key Web service availability and latency, and validate the content returned by a query (see the sketch after this list)
  • Save organizations time and money, since Web services use protocols that are already widely adopted and therefore require very little investment
  • Access Web services easily when you want to track or modify data
  • Pinpoint the root cause of problems in minutes with built-in alerting and reporting capabilities
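
As a rough illustration of the availability, latency, and content checks above, here's a minimal Perl sketch; the endpoint URL and the expected 'status' field are made-up assumptions, and it uses the LWP::UserAgent and JSON::PP modules:

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use JSON::PP;
use Time::HiRes qw(gettimeofday tv_interval);

# hypothetical JSON web service endpoint to check
my $url = "http://example.com/api/account/balance";

my $ua = LWP::UserAgent->new(timeout => 10);

my $t0       = [gettimeofday];
my $response = $ua->get($url);          # availability check
my $latency  = tv_interval($t0);        # latency in seconds

if (!$response->is_success) {
    die "Service DOWN: " . $response->status_line . "\n";
}

# content validation: does the response parse and contain the expected field?
my $data = eval { decode_json($response->decoded_content) };
if (!$data or !defined $data->{status}) {
    die "Service UP but returned unexpected content\n";
}

printf "Service OK (status=%s) in %.3f seconds\n", $data->{status}, $latency;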

        

Even if precautionary measures are taken, sometimes issues with applications just can't be avoided. However, with the proper knowledge of measuring and monitoring Web services, application downtime can be identified and remediated in a timely fashion. Another great option is to implement an application performance monitoring tool. A tool like this makes measuring the performance of Web services substantially easier. Moreover, it provides the insight you need to optimize overall performance.

            

Identify issues and start monitoring your JSON & SOAP Web services.

"If you know both yourself and your enemy, you can win a hundred battles without jeopardy."

-- Sun Tzu, The Art of War

 

 

Hi there! Over the past few weeks as a Thwack Ambassador, I have enjoyed sharing the information security topics that interest me and the great interactions with you. I have learned a lot from your comments and stories, some of them fun, too. Who said geeks have no sense of humor? I highly recommend reading kevincrouch4's daydream of syldra in my second June Ambassador blog post, There is No New Thing Under the Sun. What about BYOD?

 

In this last June Ambassador blog post, I would like to focus on the indispensable part of an information security system: you and me. I'm going to share a few things that can keep us moving forward in this rapidly changing field and help us contribute more to the organizations we work for.

 

Learn To Be A Hacker

Sun Tzu, in The Art of War, stated that to know your enemy, you must become your enemy. My employer is supportive of my infosec training: I was sent to incident handling and pentesting classes, and I learned a great deal about hacking. I also learned about those hacker communities. You don't have to be a hacker, but you do need to know how to protect against hackers. OK, you can call yourself a white-hat hacker.

 

Read A Lot

In the early stage of my infosec career, I was captivated by Richard Bejtlich's writings on his TaoSecurity blog. All four of Bejtlich's books are in my library. There is much information for us to learn and absorb in books, websites, blogs, forums, etc. Oh, please tell me you've read Kevin Mitnick's Ghost in the Wires.

 

Get Informed

I receive email feeds from US-CERT and SANS. My InfoSec Officer gets email alerts from MS-ISAC (Multi-State Information Sharing & Analysis Center). Vendor-specific security information is useful, too. For example, the Zero Day Initiative (ZDI), founded by TippingPoint, now part of HP, is a great source of information on vulnerabilities and attacks. And if Microsoft releases patches outside its regular Tuesday cycle, you know it's a really big deal.

 

Keep Learning

I have to confess that the first time I heard of MDM was at a vendor luncheon. I encourage you to attend conferences and vendor events; Black Hat is a good example. There is always something to learn, sometimes over a nice meal or two. At those conferences and events you will also have opportunities to network with your fellows.

 

Understand Networking And Other IT Disciplines

I am not talking about social networking; I am talking about networking. What's on the wire doesn't lie, but you need to understand how the stuff on the wire works: Ethernet, TCP/IP, and higher-layer protocols. An understanding of Windows login details will help you figure out the last break-in. And you may already know that Python is a popular programming language among hackers.

 

Be Willing To Share

I am pretty active on Twitter, and I keep Twitter for professional stuff; all personal/family/leisure stuff stays on Facebook. I get a lot of work-related information from my fellow Tweeps, and they get a lot from me. It's a win-win for us. You can't fight this infosec battle alone; you need support from your colleagues and other people. Even if you have the honor of handling infosec by yourself in your organization, share what you've learned and what you know with others on platforms like this Thwack community. We build each other up.

 

Are you with me in this journey? What's your opinion? I am looking forward to hearing from you.

Over the past few weeks I've been blogging about network configuration management, and the first few blog posts were fairly straightforward, I have to admit. I didn't have to spend a whole lot of time thinking about what to talk about, because it was, for the most part, a natural evolution: talk about basic concepts, demonstrate using those concepts, show how to apply concepts to the operations cycle... But as I sit down to write this, my final post in the series, I'm torn about which way to take it.

 

One natural direction to go would be 'taking things further' or 'optimizing your network configuration management environment', but I think those topics are so obvious that I probably wouldn't be doing this post justice if I stopped there. So here's what I'm going to do: let's talk about those things, but then let's take it a step further and challenge ourselves to describe what's possible – things that we aren't necessarily doing today as part of a configuration management process, but things we wish we were. So without further ado, here goes...

 

Taking Things Further

 

It's easy to plot a course for 'taking things further' based on where you currently are in your network configuration management strategy... For those of us not doing anything yet, we simply start collecting data; for those of us collecting data but not using it, we start thinking about how to apply it to our operational processes. We start thinking about automation; we start thinking about opportunities to leverage information that we now have, information that was previously out of reach. Whatever you're doing now, trust me, it's not enough, as I'm sure you'll find. Do more, and it'll quickly start to snowball and you won't be able to help yourself.

 

Thinking Forward

 

As I look back and think about the ways that network configuration management techniques and solutions have changed the way I do things operationally, from ‘getting backups’ in the beginning to ‘auditing standard configuration processes’ years later, I find that there’s always something more that’s possible, something more to achieve.  Here are a few of my thoughts on where we need to go next as an industry (and granted, some of these things are already possible, but not yet widely adopted, so this is certainly a point-of-view based vision that may vary based on your individual perspectives).

 

Artificial Network Intelligence – Wouldn't it be amazing if we could 'teach' the network to react to changing environments based on a strategy or approach, rather than a more specific 'if you see this behavior, run this code'? Think about a network that can be self-diagnosing: reviewing configuration data collected as part of network configuration management processes, observing network availability changes as reported by a network monitoring solution, and alerting the operations staff not only that 'there is a network outage' but also 'these are the things that just changed, and this is what we think is causing the problem'. I mean, WOW, that would be something. Kind of takes the phrase 'self-healing network' to the next level, doesn't it?

 

Proactive Fault Prevention – How about a solution that monitors a network for the common 'configuration events' that most often create an outage, and proactively alerts us that 'something bad may be brewing, you should look into this' – all before a service degradation or outage ever happens?

 

I’m sure there are half a dozen more types of things that are possible as an output of a strong network configuration management strategy… These are a few of my ideas…. So my challenge to you – give me a few of yours.   What do you think is possible that we haven’t attempted yet?  What kinds of outputs from a network configuration management solution could be leveraged to change ‘doing business as usual’ and revolutionize how we do things?  I’m curious to see the dialogue that we can have on this one.

 

It's been a fun few weeks, and I hope these posts have been informative, have sparked an interest in network configuration management and what's possible, and have helped shape the way you do business. Let the comments begin!

 

Josh  - @ciscovoicedude

As a network administrator, it's not uncommon to hear the phrase, "The network is down!" User complaints begin to pour in: the website is down, Internet connectivity is lost, data is inaccessible, email is down, applications are slow, data has been lost, etc. As a result, the IT department becomes bombarded with phone calls, emails, and help desk tickets.


When investigating the source of network downtime, administrators must also factor in the complexity of their network, diversity of device types, past network changes, unexpected device failures, multiple admin access, security breaches, and business/user requirements.

 

While this can be a lot of information for the network administrator and IT department to process, it's important to remain calm and remedy the issue quickly. Ultimately, network downtime leads to business downtime, which is costly to business organizations. For admins to effectively avoid and remedy such issues, and in turn save the dollars lost to downtime, they must be equipped with the proper tools and processes.

 

Utilize effective tools & processes to avoid costly network downtime:


Automation: The advantage of automation is that it improves processes and helps reduce costs. If the administrator could spend less time executing a configuration task, and execute it without error, how valuable would that be? It's important to reduce time spent on such processes so that time can be spent on more productive tasks. Automating configuration management tasks helps:

  • Eliminate time-consuming activities that don’t need admin intervention for execution
  • Reduce the number of errors due to manual efforts
  • Introduce standardization across processes and improve consistency
  • Schedule tasks to ensure required routine tasks occur in a timely manner
  • Reduce dependency and requirement of resources to execute tasks manually

When administrators are able to limit the time spent on less important, monotonous tasks, they can then focus on tasks where value can be added. Automation should start with key IT processes that present a pain point in execution. Some examples are daily configuration backup, bulk and routine configuration changes, device inventory, etc. Once processes have been automated, there should be a clear return on investment. Many times this comes in the form of a quantifiable amount of time saved that is then reinvested in higher-value activities (a minimal scheduling sketch appears below).
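
To give a feel for how simple the first step can be, here's a minimal sketch of scheduling a nightly config backup with cron; the script path is a hypothetical stand-in for whatever backup script or tool you actually use:

# run the nightly configuration backup at 2:00 AM
0 2 * * * /usr/bin/perl /opt/netops/config-backup.pl >> /var/log/config-backup.log 2>&1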

 

Monitoring & Control: Close monitoring helps to effectively identify and resolve issues. Whether it’s human error or non-compliance with a company policy, violations need to be detected and remediated quickly. With close monitoring, administrators gain control and are able to identify problems before an auditor does or before end users start complaining. Continuous monitoring helps:

  • Reduce network downtime, as anomalies are easily detected and fixed in time
  • Be alerted on any configuration change in the network
  • Exercise control with all changes going through approval
  • Ensure effective enforcement of company policies throughout the network
  • Eliminate manual checks on configurations against regulatory requirements


Scalability & Security: As a business grows, so does the network. Whatever management tool is in use, it must be capable of easily managing more devices, regardless of the vendor or device type. In networks where there are multiple administrators, accessibility and accountability must be accommodated. Scalability and security features facilitate:

  • Efficient management of the network in spite of its growing size
  • Accountability of who changed what in the network and when
  • Time saving in deployment of the tool
  • More time available to improve network efficiency

 

As previously stated, configuration errors can result in downtime, which is unexpected, costly, and frustrating. However, downtime can be avoided by following a few industry-proven best practices, like:

  • Regularly backing up all device configurations
  • Automating bulk change pushes
  • Practicing configuration change approval, reporting, and alerting
  • Maintaining an updated device inventory
  • Adhering to internal & external standards

 

Network downtime caused by configuration errors can certainly be avoided if you take the necessary measures. Further, it will save you substantial time dealing with issues and rid you of unnecessary stress. Learn more in this video that discusses network configuration best practices in detail so you can start saving your company dollars by improving the way you manage your network.

Enabling NetFlow will give you some insight on what your network actually carries

-- Nicolas Fischbach, at the Black Hat conference

 

 

Even though we discuss NetFlow in this article, the content also applies to other flow technologies: J-Flow, sFlow, NetStream, etc.

 

In the discussion of my first June Ambassador blog post, The Cost of InfoSec Stewardship, jswan provided a great idea for reducing information $ecurity costs: implementing solutions that can be used for multiple purposes. He stated, for example, that NetFlow could be used by multiple departments in an organization, like Operations, Security, Networking, and Help Desk.

 

My organization is mainly a Cisco shop, so we implement NetFlow. Since I split my working hours between Network Security and Data Center / Campus Networking, I have opportunities to use NetFlow as both an information security tool and a network performance tool. We, like many organizations, were introduced to NetFlow analyzers by various vendors as security tools. NetFlow analyzer vendors know that many organizations lack knowledge of what's going on in their networks. The vendors also know that by showing the executives the unexpected Top Talkers in the network after one or two days of a POC, the executives will be convinced to pull out the checkbook.

 

The NetFlow solution for security doesn't come cheap. The cost of the NetFlow analyzer is one thing. You also need FULL NetFlow, rather than SAMPLED NetFlow, for network forensics. If you have a scale-out network, you'll need multiple flow collectors, and in turn you'll need more storage. In the end, it is a good idea to present the solution to the CIO as multi-purpose.
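
For reference, enabling full (unsampled) NetFlow on a classic Cisco IOS router looks roughly like the snippet below; the interface name, collector address, and port are illustrative assumptions:

! capture all flows entering the interface (full NetFlow, not sampled)
interface GigabitEthernet0/1
 ip flow ingress
!
! export flow records to the collector
ip flow-export version 9
ip flow-export destination 10.0.0.50 2055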

 

Do you want to hear a true story of an "alternative" use of NetFlow? A Windows server admin accidentally clicked "Go" on "Default Server" in the Rapid Deployment System. Immediately, hundreds of servers were… "defaulted" and started to PXE boot. Countless alerts showed up in the NOC monitoring system. Within five minutes, the IT managers of different departments stormed into the poor network manager's office and asked what was wrong with the network (pretty common, I guess). Executives commanded reboots of this switch and that router. After the pale-faced Windows admin confessed his mistake, for the next 45 minutes no one knew where to start identifying all the damaged servers.

 

The NetFlow guy in another office was notified about the incident. He calmly ran a NetFlow report for all PXE boot traffic for the period of the incident. That report saved many lives that day.

 

Does your organization implement NetFlow or any other flow technology for information security?

Is that technology also used for something other than security?

Do you have any story to share?

 

I hope your story is not that scary.

Happy Monday Everybody!

 

In my last two blog posts we talked about network configuration management. I talked about my previous experience with various tools and techniques, and how my needs have changed over the years as my job and networks evolved. We then went into a kicking-the-tires exercise and talked about one scripting-based methodology for performing basic configuration archival, which hopefully gave you a glimpse of just a small sample of what's possible with network configuration management techniques. In this post, I'd like to talk about some of the benefits of implementing a network configuration management solution, the types of information you can collect, and how we can use the information we've gathered.

 

Configuration Archival

 

First and foremost, one of the areas of focus we commonly explore with a network configuration management strategy is configuration archival. This can be something as simple as daily or weekly configuration backups into a repository for 'just in case the device dies' recovery, or it can be far more complex, letting you go back and review prior configuration revisions, whatever the reason. I can't tell you how many times I've wished I had a reference point for how something used to work. Even configurations I created myself a decade or more ago can have value in the things I do today.

 

Bulk Change Deployment

 

Don't think of configuration management as a one-way task. It's not just about pulling information from devices; it can be a very valuable tool for pushing configuration to devices. When I left a previous company where I had been for 8+ years, it would have been negligent had the employer not insisted that all critical passwords be changed. The keys I held to that environment were pretty powerful. I hate to laugh about it, but we didn't have the best configuration management tools in that environment, and somebody had to manually touch a few hundred devices to change passwords. With a little bit of scripting, or an off-the-shelf product, that task could have been greatly simplified. Deploying even the most trivial or the most advanced of changes using a configuration management solution can pay off in spades.
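
To make that concrete, here's a minimal sketch of what such a bulk change could look like in Perl with the Net::Telnet::Cisco module; the credentials, host list file, and new secret are placeholders, and a real tool would want proper logging and error handling:

#!/usr/bin/perl
use strict;
use warnings;
use Net::Telnet::Cisco;

my ($user, $pass, $enablepw) = ("admin", "oldpass", "enablepw");   # placeholders
my $newsecret = "n3w-s3cret";                # the replacement enable secret

open my $fh, '<', 'routers.csv' or die "routers.csv: $!";
while (my $host = <$fh>) {
    chomp $host;
    next unless $host;                       # skip blank lines
    my $conn = Net::Telnet::Cisco->new(Host => $host);
    $conn->login($user, $pass);
    $conn->enable($enablepw);
    # push the change and save it
    $conn->cmd("configure terminal");
    $conn->cmd("enable secret $newsecret");
    $conn->cmd("end");
    $conn->cmd("write memory");
    print "Changed enable secret on $host\n";
    $conn->close;
}
close $fh;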

 

Maintaining Standards / Detecting Unauthorized Changes

 

Especially if you have a mid-size or larger network, it's likely that you employ some sort of 'configuration standards,' or configuration consistencies that need to be maintained across a multitude of devices. Maybe this is policy-based, or maybe it's purely technical in nature, but with proper configuration management tools you can audit your devices and make sure that things are being done 'the way we planned.' This can go a long way toward ensuring the operational availability of your environment. You can also use this same logic to detect when policy violations may have occurred by detecting anomalous configurations (a bare-bones drift check is sketched below).
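
As a trivial illustration, once you have nightly config pulls on disk, even Perl's core File::Compare module can flag drift against an approved baseline; the archive/ and baseline/ directory layout here is an assumption:

#!/usr/bin/perl
use strict;
use warnings;
use File::Compare;

# compare each archived config against its approved baseline copy
foreach my $current (glob "archive/*-config.txt") {
    (my $baseline = $current) =~ s{^archive/}{baseline/};
    if (!-e $baseline) {
        print "NO BASELINE: $current\n";
    } elsif (compare($current, $baseline) != 0) {
        print "DRIFT DETECTED: $current differs from $baseline\n";
    }
}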

 

Assist with Inventory Management and Asset Control

 

Finally, but certainly not the least important, a network configuration management strategy can greatly help audit device inventory and assist with asset control.  Being able to pull a list of all active devices on your network sure helps come maintenance renewal time.  Don’t ask me how many times I’ve had to have an engineer perform a physical inventory because we weren’t 100% sure of what was installed in a particular location.

 

I've touched on many of the network configuration management benefits and key pieces of information that I use in my operations – tell me about yours: how you use the network configuration management tools you have implemented in your network, and how they benefit the running of your operation.

 

@ciscovoicedude

Those of you in IT administration (particularly IT security) know the challenges involved with protecting corporate data stored in your network. You also know that you regularly face an onslaught of new and sophisticated hacking methods, malware, and other threats. It is an uphill task to safeguard data—especially the files stored in workstations and servers.

 

We are seeing a surge of data breaches across all industries that incur both financial and reputational losses for companies. Whether an intrusion is external or initiated by an internal source, the ramifications are equally detrimental. This is why we have compliance standards (PCI DSS, HIPAA, NERC, SOX, etc.) that make it mandatory to monitor, detect, and prevent threats to sensitive information such as intellectual property assets, software code and programs, financial and earnings data, and sensitive customer information including account credentials and passwords. Aside from costs of security breaches, the penalty costs for compliance violations are also colossal.

 

This is where File Integrity Monitoring (FIM) will help. FIM monitors your files, alerts you about file accesses and changes, and protects files and data from attacks. FIM applies to all types of system files including key content/data files, database files, Web files, audio/video files, system binaries, configuration files, and system registries.

 

As part of your FIM efforts, you need real-time information about:

  • When files were accessed/created/modified/moved/deleted
  • Changes in file sizes and versions
  • User login information for file access and file modifications
  • Changes to attributes such as Read-Only, Hidden, etc.
  • Changes to security access permissions
  • Changes to directories and registry keys
  • Changes to the file’s group ownership

 

To obtain this information, you can listen to the operating system events that are generated by file activity and user access to files. However, it is a formidable task to sift through the hundreds of thousands of file events to identify specific violations (to see why a do-it-yourself approach falls short, consider the minimal sketch below).
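
The sketch below, using only core Perl modules, shows about the simplest possible file integrity check: a periodic checksum sweep. It can tell you that a file changed, but not who changed it, when, or how, which is exactly the context that OS audit events (and the tooling to sift them) provide. The watched directory is an illustrative assumption:

#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use Digest::SHA;

my $watched  = "/etc";                  # hypothetical directory to monitor
my $manifest = "fim-manifest.txt";

# load the previous run's checksums (every file reports as NEW on the first run)
my %old;
if (open my $fh, '<', $manifest) {
    while (<$fh>) { chomp; my ($sum, $file) = split / /, $_, 2; $old{$file} = $sum; }
    close $fh;
}

# walk the tree, hash every file, and report anything new or changed
open my $out, '>', $manifest or die "$manifest: $!";
find(sub {
    return unless -f $_;
    my $file = $File::Find::name;
    my $sum  = eval { Digest::SHA->new(256)->addfile($_)->hexdigest } or return;
    print "CHANGED: $file\n" if exists $old{$file} and $old{$file} ne $sum;
    print "NEW: $file\n" unless exists $old{$file};
    print $out "$sum $file\n";
}, $watched);
close $out;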

 

Real-Time File Integrity Monitoring Embedded with SIEM

Security Information and Event Management (SIEM) systems already gather log data for real-time security analytics. By integrating FIM with SIEM, you gain a unified view of your IT infrastructure, making it possible to identify the root causes of file access events and stop advanced threats that are difficult to detect without SIEM.

 

Some benefits of combining FIM with SIEM include:

  • User-aware File Integrity Monitoring: Complete user-activity monitoring. System, Active Directory®, and file audit events are correlated to obtain information about which user login accessed and changed a file. You can also identify other user activities before and after the files were accessed and modified.
  • Data loss prevention: Correlating file audit events with other log data gathered by SIEM, you gain advanced threat intelligence to help pinpoint breach attempts. With SIEM’s remediation capabilities, you can automate responsive actions (shut down systems, detach USB devices, disconnect the system from the network, log off users, delete user accounts, etc.) to prevent breaches and safeguard data.
  • Zero-day malware detection: Malware is one of the primary threats against file integrity and safety. SIEM detects zero-day malware via AV and IDS/IPS logs and correlates them with file audit events. This enables you to stop the malware in its tracks before it harms your secure files. You can use SIEM's incident-response actions to kill malicious processes or quarantine systems for complete endpoint protection.
  • Continuous compliance support: FIM is a key requirement for many compliance regulations. SIEM systems offer out-of-the-box compliance templates to help you with compliance audits. Given that FIM results are included in your compliance reports, you can demonstrate to auditors that you have complete network information security and adhere to compliance regulations.

 

A significant benefit of combining file audit events with SIEM is that SIEM systems enable you to reduce the noise of unnecessary events. You can set up custom rules to alert you only when predefined correlation conditions are met. This eliminates the complexity of manually sifting through a barrage of file audit events.

 

Learn more about the new real-time file integrity monitoring feature in SolarWinds® Log & Event Manager version 6.0.


In my last article, I talked about using various network tools over the years to automate network configuration management. We had some great comments and feedback on that post, so I thought I'd take a moment to do a deep dive on one of the tools I've used: Perl. This is a great 'first step' into configuration archival for somebody who may be scripting-inclined but doesn't yet need full-blown change-comparison capability. This isn't meant to be a programming tutorial on how to write code; it's more of a jumping-off point for those of you who aren't using tools today and would like to kick the tires on what's possible. This is more an exercise in getting you to 'think network management', and we're just going to use a little scripting wizardry to do just that. I'm going to assume that you've got a working installation of Perl on your operating system of choice (for Windows users, ActivePerl is a great choice); Linux and OS X users should already have Perl installed with the operating system.

 

Let me start off by sharing some code. (And feel free to share / modify / rant about this code. It's a compilation of code I've pieced together over the years)

 

showrun.pl

#!/usr/bin/perl
use Net::Telnet::Cisco::IOS;

my $InFile = "routers.csv";                  # list of devices, one per line
open INFILE, $InFile or die "Cannot open $InFile: $!";
my @CONFIG = <INFILE>;
close INFILE;

my $username = "ciscovoicedude";             # the username to use when logging in
my $password = "n0fax";                      # the matching password
my $enablepw = "enablepassword";             # enable password, if needed

foreach my $host (@CONFIG) {                 # go through the list one device at a time
    chomp $host;
    next unless $host;                       # skip blank lines
    my $conn = Net::Telnet::Cisco::IOS->new(HOST => $host);   # connect to the host
    $conn->login(Name     => $username,      # log into the device
                 Password => $password);
  # $conn->enable($enablepw);                # uncomment if you have to send an enable to the router. I don't.

    my @output = $conn->getConfig();         # put the running config in an array

    my $outfile = ">" . $host . "-config.txt";   # write to a file named "host-config.txt"
    open OUTFILE, $outfile or die "Cannot write to $outfile: $!";
    print "Writing $host to file\n";         # write a status message
    print OUTFILE @output;                   # print the config to the file
    close OUTFILE;                           # close the file
    $conn->close;                            # close the connection to the device
}


 

 

In this short script, you'll see a basic 'running configuration' fetch from a device via telnet. I'm using the Net::Telnet::Cisco::IOS module, so use your favorite Perl package manager to install it before trying to run the code. In essence, the script reads an input file (I show it as a CSV file, but really I'm only accessing a single column of data) to determine which devices to connect to. See below for an example of an input file.

 

example routers.csv

10.1.1.5
10.1.1.6
10.15.2.2
192.168.22.13


 

The script will iterate through each device in the CSV file, issue a 'show run', and save the output to a text file named "devicename-config.txt". You could easily modify the script and input CSV to pass a different set of login credentials for each device, but I'll leave that as an exercise for the reader.

 

As you can see, with very basic scripting capability, you can automate configuration collection from a great number of devices. You could even automate running this using a cron job or scheduled task on your operating system.

 

Taking things a step further, you could modify this code to deploy configuration changes to devices, or do more advanced parsing and reporting. Heck, you could even store the configuration archive for all of your devices in a database, and that opens up all kinds of opportunities down the road (a minimal sketch follows).
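
For instance, assuming the DBI and DBD::SQLite modules are installed, archiving each fetched config into a SQLite database is only a few lines; the table layout here is just one illustrative choice:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# open (or create) the archive database and its table
my $dbh = DBI->connect("dbi:SQLite:dbname=configs.db", "", "", { RaiseError => 1 });
$dbh->do("CREATE TABLE IF NOT EXISTS configs (host TEXT, taken TEXT, config TEXT)");

my $insert = $dbh->prepare(
    "INSERT INTO configs (host, taken, config) VALUES (?, datetime('now'), ?)"
);

# store one device's config text, e.g. join('', @output) from showrun.pl
sub archive_config {
    my ($host, $config_text) = @_;
    $insert->execute($host, $config_text);
}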

 

So I leave you with this - if you're one of those people who haven't yet dipped their toes into this, go ahead and take the plunge and let us know what you think!  For anybody who has written code to do even cooler things, I invite you to share some links with the rest of us so we can collaborate and learn together as a community.

 

Next time I'm going to talk about strategies for dealing with all of this data we have gathered, and useful things we can do with it.

 

@ciscovoicedude

Meanwhile, today’s security teams are grappling with the “any-to-any problem”: how to secure any user, on any device, located anywhere, accessing any application or resource. The BYOD trend only complicates these efforts. It’s difficult to manage all of these types of equipment, especially with a limited IT budget. In a BYOD environment, the CISO needs to be especially certain that the data room is tightly controlled.

 

-- Cisco 2014 Annual Security Report

 

 

A while back, I was chatting with my colleague about BYOD (Bring Your Own Device) at lunch. I stated that we would need to pay more attention to BYOD, as it had started to put more stress on our policy, network, and security. My colleague rolled his eyes and said BYOD was nothing new; people had been bringing laptops onto the company's network FOR YEARS.

 

The next morning, as soon as I saw him, I told him that the BYOD situation was different nowadays. I said that back in the old days, only certain people brought ONE laptop PER PERSON onto our network, but now EVERY person can easily bring in multiple devices. I counted mine: a BlackBerry, an iPhone, an iPad, and a MacBook Pro. That colleague had the same number of devices, but luckily he'd left his iPad at home for his son, so he brought in one less that day.

 

Many organizations have found that the wireless subnets designed a couple of years ago keep running out of IP addresses; they have to constantly expand the wireless network scope. Not only does the sudden increase in the number of devices on the network trouble organizations, but they also realize they must face the challenge, and the complexity, of securing the network and their valuable data from mobile devices. Traditional NAC doesn't seem able to handle this new trend. MDM comes into the picture, but is it mature enough?

 

According to mobile OS market share data, Android currently dominates the market, followed by iOS. The problem is that a large percentage of Android devices still run outdated releases. These devices are subject to security vulnerabilities. The information security of many organizations is solid and well-protected from the outside but really weak from the inside. Now more and more vulnerable devices are being brought directly onto the inside network. I'm sure you get the picture.

 

Does your organization face the same challenge? How does your organization protect itself from BYOD risks? Through both policy and MDM? Do you think the current MDM solutions are good enough?

 

I am looking forward to reading your stories and comments.

I took some time this week to update my list of not-so-common metrics that I find useful when monitoring for performance. With SQL Server 2014 having been released recently, it was as good a time as any to make the updates. You can see details about the metrics and my thoughts behind each one here:

 

http://thomaslarock.com/2014/06/performance-metrics-for-sql-server-2014/

 

In the post there is a link to download the full script. I've also embedded the script at the bottom of the post, for those of us too lazy to click on links these days.

 

The purpose of these metrics is to help me gain insight into areas of SQL Server performance that you simply don't see with standard tools or performance counters.

 

Let me know if you find the metrics useful, or if there are other metrics you would like to see me include.

As a production DBA for over seven years with a large-ish financial services company, I've seen more than my share of upgrades. Over the years I've added and modified my upgrade/migration checklists. The latest version, for SQL Server 2014, is available here:

 

Upgrading to SQL Server 2014: A Dozen Things to Check

 

The post covers a variety of topics:

  1. Hardware
  2. Compatibility checks
  3. Data integrity
  4. Upgrade versus migration
  5. Ensuring reliable performance

 

Have a look and let me know if there is something you have on your checklist that you feel should be on mine.

A recap of the previous month's notable tech news items, blog posts, white papers, videos, events, etc. For the week ending Friday, May 30, 2014.

 

News

 

AANPM: the new $1bn acronym in network management

What we need to know about Application Aware Network Performance Management (AANPM), otherwise known as Network Performance Monitoring and Diagnostics (NPMD).

 

EE 4G "is starting to slow down"

EE's 4G performance is starting to suffer as more people join its network, according to network monitoring firm RootMetrics.

 

Cisco's Chambers tells Obama that NSA surveillance impacts U.S. technology sales

Cisco Systems’ CEO John Chambers has written to U.S. President Barack Obama, asking for his intervention so that U.S. technology sales are not affected by a loss in trust as a result of reports of surveillance by the U.S. National Security Agency.

 

Heartbleed Bug, One Month Later

One month on, the Heartbleed bug has affected hundreds of thousands of servers, and Android devices are still prone to it.

 

 

Blogger Exploits

 

Are the Internet of Things (IoT) & Internet of Everything (IoE) the Same Thing?

For quite some time, Cisco and Qualcomm have used the term Internet of Everything (IoE) to describe what almost everyone else refers to as the Internet of Things (IoT).

 

DDoS attacks using SNMP amplification on the rise

Attackers are increasingly abusing devices configured to publicly respond to SNMP (Simple Network Management Protocol) requests over the Internet to amplify distributed denial-of-service attacks.

 

SDN in Early Stages While Video Conferencing Goes Mainstream

Findings from Network Instruments' Seventh Annual State of the Network Global Study, in which 241 network engineers and managers were surveyed.

 

BYOD 'bill of rights' could allay security fears

A BYOD Bill of Rights has been proposed in a bid to protect both employees' privacy and business security.

 

 

Webinars & Podcasts

 

Connecting VMware to the Network

The nature of the “VMware vSwitch” and how its advanced patch panel capabilities can be integrated with the physical network.

 

Food for Thought

 

What SDN Could Mean for Networking Careers in One Picture – The Peering Introvert, Ethan Banks

 

The One Skill Worth Mastering In IT - There’s one thing that people overlook that can literally make or break your success in IT.

First off, let me introduce myself. My name is Josh Kittle, @ciscovoicedude on Twitter, and I'll be one of the Thwack Ambassadors for the month of June. I'm honored to have been selected to participate in this wonderful community, and I look forward to the dialogue to come.


I first started managing infrastructure devices circa 2000, and I quickly learned that keeping track of how each and every device on my network was configured was going to be a very important task. Not just from a 'just in case' standpoint, although I have to admit that I initially approached this need from a 'backup' viewpoint, but also from a 'what's changed since X' or 'how did we have this configured when it worked 2 years ago for this other customer' standpoint. And let me tell you, the need for configuration management has never lessened; it has only grown exponentially.


Early on, I used client apps such as Kiwi CatTools, which worked for a while when it was just me. But as my networks grew from 5-10 devices to hundreds of devices, and the number of people interacting with the network infrastructure grew, so did the need for configuration management capabilities.


As time passed, I went on to write many of my own tools using scripting languages such as Perl and PHP, combining this 'go fetch' capability (thanks to Perl modules like Net::Telnet::Cisco::IOS) with back-end database services running on MySQL.


So my question for you is this - what tools and / or processes have you developed to deal with network configuration management, and have they changed much over the years, or are you doing the same things now that you were doing in years past?


I’ll talk more about some of the specifics of how I’ve approached the use of these tools in future posts.

 

@ciscovoicedude

"Five billion years and it still comes down to money." -- The Doctor

 

 

Hello Thwack, this is Gideon Tam again! I was one of the Thwack Ambassadors for the month of January 2014. Back in January we had great discussions and comments on Log & Event Management topics in the General Security & Compliance area. If you haven't seen those discussions, here are the links:

 

To Log Or Not To Log: That Is The Question

Don't Panic and Know Where Your Logs Are

So Good They Can't Ignore SIEM

Winning The Loser's Game of Information Security

 

In the last discussion, Winning The Loser's Game Of Information Security, we generally agreed that information security is not a losing battle at all, even though information security breaches make headline news all the time (you might have received an email from eBay last week about changing your password). Endurance and persistence, my dear fellows.

 

Recently we planned to replace our current Internet perimeter firewalls with Next-Generation Firewalls. The price quote we got after a few negotiations still made our eyes pop. This made me think:

 

Is it possible to lower the cost of information security?

 

In January we talked about how SIEM doesn't come cheap. Remember, the S in SIEM is $? We also discussed defense in depth. All of these come with a huge price tag. Yes, we can cut some corners when the IT budget permits, but we can only cut so much. Even if we are able to reduce the costs of information security equipment, what about the costs of the storage needed to keep data for HIPAA or PCI compliance?

 

Thanks to Steve Jobs and Jeff Bezos, we now face new IT challenges: BYOD, public and private clouds, etc. All of a sudden we need to implement security measures that we haven't implemented before. Of course, vendors help us by providing their awesome solutions, and in turn we help them with bigger budgets.

 

You may say that we can save by using open source projects/software/applications. I have some open source applications in my environment, and I've found that it takes quite a bit of manpower to stand up, implement, and maintain systems built on open source applications. My colleagues and I have been thinking about replacing those systems with vendor solutions. And open source is open source. For example, remember Snort -> Sourcefire -> Cisco?

 

To me, it's very hard to drive information security costs down. I will, of course, do my best to keep the expense as low as possible. But I'll also provide information to the CIO to talk to the CEO and the CFO to request more funding. What do you think? If you don't agree with me, that's perfectly fine; I want to hear from you and learn from you. Please drop some thoughts, comments, and feedback here.
