
Geek Speak


I’m in the middle of a tough project right now: a large client is converting a large number of physical SQL Servers to virtual machines. They’ve done most of the right things—the underlying infrastructure is really strong, the storage is more than adequate, and they aren’t overprovisioning the virtual environment.

 

Where the challenge comes in is how to convert from physical to virtual. The classical approach is to build new database VMs, restore backups from the physical servers to the VMs, and ship log files until cutover time. However, in this case there are some application-level challenges preventing that approach (mainly heavily customized application tier software). Even so, my preferred method here is to virtualize the system drives and then restore the databases using database restore operations.

 

This ensures the consistency of the databases and rules out any corruption. Traditional P2V products have challenges handling the rate of change in databases—many people think read-only database workloads don’t generate many writes, but remember you are still writing to a cache, and frequently using temp space. What challenges have you seen in converting from physical to virtual?

As a VoIP Engineer, I’m sure you’ve spent your fair share of time troubleshooting mysterious VoIP outages and call quality issues. You know, the ones that only happen when you’re not around and apparently only strike the most annoying end-user on the floor.

These kinds of problems can be very difficult to troubleshoot and provide root-cause analysis. Typically, troubleshooting the problem begins well after the fact and without any clues to help isolate the problem. This usually results in attempting to reproduce the issue by performing multiple test calls, checking the network for inconsistencies, and interrogating the end-user. If you’re the adventurous type, you’ll even go as far as installing your favorite packet capture software and reading through terabytes of captured network traffic, only to find that what sounded like a network issue is now gone without any trace.

 

Having a packet capture is very helpful when troubleshooting VoIP problems, but having one in the right place at the right time is easier said than done. If only there were a way to go back in time and place your packet capture between the end-user and the switch port prior to the call, surely you would have found the UDP errors or out-of-sequence packets that caused the garbled call. Unfortunately, time travel is not an option, but do not despair. By enabling a couple of features on your Cisco® Call Manager and/or Avaya® Communication Manager, next time you’ll have the clues you need to isolate these problems without the guesswork.

 

Call Detail Records go by different names—CDRs, CMRs, or CQEs—depending on the VoIP platform you own, but they all provide very similar and useful call statistics. These include the origin and destination of the call, the starting time of the call, call duration, and termination codes. They also provide more important call quality statistics like jitter, latency, packet loss, and mean opinion score (MOS).

 

Each one of these metrics can have an effect on a call and, more importantly, can be used to isolate and resolve VoIP issues. In the second post of this four-part series, I’ll cover each metric in more detail and explain what you need to look for when analyzing each while troubleshooting.

 

For more information on SolarWinds VoIP & Network Quality Manager watch our SE Guided Tour here.

The need for and quality of your product bring your customers through the door, but it’s your customer service that keeps them coming back. Your customers’ experience with your post-transaction service and support is a big contributor to your organization’s ability to gain customer confidence and loyalty.

 

Ideally, you want customers to use your products without needing technical assistance. But, with technology products, customer support calls are inevitable. Your value as a company is determined by your customers and their encounters with your organization. This means your customer support methods need to proactively accommodate your customers’ needs and timelines.

 

Often, customers needing technical assistance are in a time crunch so speedy resolutions are always desirable. So, what can you do on your end to avoid service delays, reduce customers’ wait time, and ensure that customer interactions are positive? Improving performance and productivity starts with organizing your workspace. With the right tools and staff, you can deliver a high standard of customer service that results in high customer satisfaction.

 

Automation to the Rescue

You likely get the customer-support ball rolling by creating a ticket after the initial service request (e.g., email, phone call, text message). Then there’s triage, routing, escalating, assigning a technician, and notifying the customer that a ticket has been created and a resolution is in the works. Because customer service is all about efficiency and quick turnaround times, automating as many of these tasks as you can significantly accelerates these processes and keeps you organized.

 

Your help desk is staffed with a group of hand-picked, talented technicians with particular skill sets and specialties. This is always useful for the many times you need to re-route or escalate tickets. Your help desk tool also brings its own set of capabilities to the table. It’s worth your time to configure your help desk tool to recognize the specialties of individual technicians, groups of staff, ticket categories, and priority levels. This cuts down on much of the effort of manually expediting tickets.

 

Where the Customer Service Rubber Meets the Road

Raise your hand if you’ve ever had a customer call to check the status of an issue. It’s understandable that people don’t like to wait. Drumming fingers on the desk gets old pretty fast. But what’s worse than waiting is not knowing how long the wait will be.

 

Give your customers the benefit of the human touch by keeping them informed throughout the ticketing process. Over-communicating is not annoying when it’s for the benefit of the customer. The majority of positive customer service stories include a testimonial about how the customer was kept in the loop every step of the way.

 

When you close tickets, make it easy for your customers to give you some feedback and lend their opinions about their customer service experience. Again, this is where your automated help desk processes give you the luxury of effectively interacting and building a rapport with your customers to assure them that their needs are important to you.

 

The customer relationship is the heart of any business. When a high level of customer satisfaction is your goal, your help desk ticketing processes are how you achieve most of your customer-centric milestones for that goal.

A recap of the previous month’s notable tech news items, blog posts, white papers, videos, events, etc., for the week ending Tuesday, Sep 30th, 2014.


News

 

Did VMware dismiss physical networking?

This week, analysts discuss the future of physical networking, the networking benefits midmarket companies enjoy and security challenges fueled by cloud and mobility applications.

 

Emerging challenges of maintaining content security

Content security in particular faces novel challenges as mobility and Web2.0 applications make web traffic volumes highly unpredictable, with massive spikes and troughs from one moment to the next.

 

PEAK IPV4? Global IPv6 traffic is growing, DDoS dying, says Akamai

The first time the cache network has seen a drop in use of 32-bit-wide IP addresses. Broadband and IPv6 are hot – and distributed denial-of-service attacks and IPv4 are not. Well, that's according to Akamai.

 

Why Deep Packet Inspection still matters

Deep Packet Inspection (DPI) is a technology that carries much more weight than SPI (Stateful Packet Inspection).

 

IPv6 connectivity up; enterprises begin to take note

The 2014 North American IPv6 Summit showcased the growth of the protocol and shed some light on what enterprises are doing to (finally) adopt it.

 

Blogger Exploits

 

IPv6 Migration: 10 Reasons To Get Moving

You can't afford to put off that IPv6 transition any longer. Here are 10 ways migrating to the updated protocol will benefit your enterprise.

 

Aggregation in IPv6 routing curbs effects of Internet growth

IPv6 addresses are four times larger than those based on IPv4, but experts say that doesn't mean IPv6 will slow down routers. In fact, IPv6 routing makes route aggregation simpler, which leads to smaller routing tables.

 

Network timing: Everything you need to know about NTP

As more applications use IP networks, reliable distribution of authoritative timing information is becoming critical.

 

IoT Makes You Administrator Of Things

You may not realize it, but it's likely you're already supporting and managing "Internet of Things" devices on your corporate network. Here's a reality check.

 

Supporting IoT devices requires careful WLAN design

The Internet of Things (IoT) has become defined by the perennial refrigerator that orders more milk when you run out. But in this Q&A, one networking pro explains its role in the enterprise and the issues with designing a WLAN to support IoT devices.

 

The Hybrid WAN requires a new approach to network optimization

Now that the WAN is finally evolving, I think it’s time to take a look at the infrastructure that’s used to optimize it.

 

Food for Thought

 

Using Social Media to Connect with Your Customers - Communicate more openly and often with customers.

 

10 Pitfalls to Avoid When Implementing a BYOD Program - People naturally gravitate toward devices they know how to use well in order to get a task completed.

Statistics show that IT pros spend on average 38 hours each month managing IP addresses (even more for larger networks of 5,000 IPs or more). More specifically, you spend a considerable amount of your day provisioning changes, monitoring faults, and troubleshooting problems. (SolarWinds® IT Pro Survey)


Your IP management in-basket probably stays pretty full by virtue of the fact that technology doesn’t stand still. Networks grow and manual management methods such as spreadsheets are becoming less practical. As you look toward more advanced DNS, DHCP, and IP address (DDI) management solutions, strongly consider adopting the following DDI best practices for effectively managing both IPv4 and IPv6 address blocks.


Transition to an Automated IP Management System - Automated Scanning & Discovery


The biggest disadvantage of manual IP address management (i.e. spreadsheets and management system databases) is the amount of time required to keep your documentation updated, which takes a toll on overall productivity. Manual IP management methods:

  • Have no change management system.
  • Inhibit IT team collaboration.
  • Are difficult to implement, oversee, and update.
  • Require more time and effort for troubleshooting.


An automated system saves time, reduces the risk of human error, and takes the guesswork out of your IP address space management. An automated tool automatically scans, discovers, and updates your IP status and usage information in real time. An automated tool:


  • Includes active IP space and network monitoring.
  • Helps detect IP conflicts in the network.
  • Compiles historical data of IP address usage.
  • Manages DHCP and DNS servers on an organization-wide basis.
  • Helps simplify the transition from IPv4 to IPv6.


Manually tracking IP address usage is becoming less practical for organizations of all sizes. Trends such as BYOD, IoT, and Server Virtualization have greatly increased the consumption of IP addresses. Therefore, it’s apparent that automated IP address management is a more reliable way of tracking IP address usage.
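If you are curious what automated discovery boils down to, a crude stand-in is a scripted ping sweep. The example below uses nmap against a placeholder subnet; a real IPAM tool runs this kind of scan continuously and records the results against its address inventory.

# ping sweep only (no port scan); lists which addresses in the subnet answered
nmap -sn 10.0.1.0/24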


Configure IP Address Lease Durations Based on IP Usage and Availability - Planning & Provisioning


All DHCP servers are configured with a lease configuration that contains information on the duration for which an IP address is leased out to a particular device/client. The DHCP server leases the IP to the client for a preset length of time and the client can automatically renew the lease further, if required. The lease duration has a direct impact on IP consumption in the network. Consider this fact when you configure the duration of leases.


For example, when a client leases an IP address, it retains ownership of the IP for the specified lease period. If the client needs the IP address for only a few minutes and the lease time is set to a few days, the IP remains tied up when it’s actually not being used. Here are a few tips for ensuring that IP addresses are used efficiently:


  • Set the DHCP server lease terms based on the types of clients that the DHCP server services.
  • Keep the lease duration relatively short (e.g., a few hours) for DHCP servers that service BYOD clients; a sample configuration follows this list.
  • Try to keep the lease durations of the mobile/wireless devices coming in and out of your network short.
  • Ensure that your DNS servers are updated with all of the leases, lease renewals, and lease expirations.
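To make the short-lease tip concrete, here is roughly what it looks like on a Cisco IOS DHCP server. The pool name, addressing, and eight-hour lease below are illustrative assumptions, not values from this post.

ip dhcp pool BYOD-WIFI
 network 10.20.30.0 255.255.255.0
 default-router 10.20.30.1
 dns-server 10.20.1.5
 ! lease <days> <hours> <minutes>; 0 8 = eight hours
 lease 0 8

On a dedicated DHCP server (ISC dhcpd, Windows Server, etc.), the equivalent knobs are the default and maximum lease times configured on the scope that serves your wireless/BYOD subnet.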


Proactively Identify IP Conflicts and Utilization Issues - Monitoring, Alerting, & Troubleshooting


Actively monitoring your network serves two purposes: avoiding network performance issues and troubleshooting problems. This keeps you aware of situations that can cause network downtime or slow employee productivity. Active monitoring also adds a layer of security by detecting occurrences such as the presence of a rogue device on the network or an unrecognized MAC address in a subnet.


Obviously, you want to make sure that your reservoir of IP addresses never runs dry so you can ensure that all end-users can connect to the network. Tracking IP address consumption over time helps you project the eventual depletion of your IP addresses and make the necessary plans. Also, being alerted about events like IP conflicts, full scopes, DNS record mismatches, etc. is helpful for effective IP management.


Performing all these tasks using an IP management solution makes your IP address management life easier. A tool that tightly couples DNS and DHCP keeps you more organized because IP addresses and DNS records are updated simultaneously, which prevents problems resulting from outdated host records. Also, make sure that your IP management tool can function as a management layer that provides reliable interoperability between heterogeneous DNS and DHCP solutions.


Read this whitepaper to learn more about best practices for configuring and monitoring DHCP, DNS, and IP addresses.


It’s a common complaint among IT pros on desktop support teams that they spend more time managing service requests than doing actual IT support. This is because there is no easy way to track and manage service requests; there’s just too much complexity when they are handled manually. While enterprises are realizing the value of IT service support solutions to improve their service delivery, many small companies still depend on disparate channels for logging incidents—phone, email, chat, even verbal communication—with an Excel sheet as a manual workaround for a help desk. This is certainly not the most efficient way to handle service requests if you really want to save time, invest it in actual end-user support, and improve productivity. So, what complexities come up in IT service request handling, and how can IT pros be relieved of all these ad hoc and time-consuming management chores?


 

#1 Manual Ticket Assignment: Disappearing Time & Disorganized Process

When a ticket is logged, the first thing you need to do is sort it based on where it came from, the type of issue, and the priority level, and then assign it to the right technician—one who is available and has the skill set to handle the ticket. Automation is key. Without automating the ticket assignment process, you will end up spending a lot of time analyzing these parameters manually. With pre-defined ticket assignment and routing logic built into a help desk system, you can streamline the process and ensure each service request gets logged and immediately assigned to the right technician.

 

#2 A Bitter Tryst with Service Fulfilment Delays & Escalations

Before you start addressing service fulfilment delays, you must ensure you track time spent on incidents. Once you know what your average ticket response time and ticket resolution time are, you can start building baselines and set up SLA-breach alerts. This is being proactive rather than reactive. Before you cause a delay on a ticket for any unplanned reason, you will be alerted that you are nearing the SLA timeframe. This will help make service delivery more seamless and prompt. Escalations are another issue when SLAs are not met. There has to be a strategic approach to planning and assigning an escalation workflow, so that the next technician, group of technicians, or next-level support person (a manager or whoever is in charge) is notified and SLA breaches or client complaints are escalated accordingly.

 

#3 Ticket Resolution Status: Kept In the Dark

As a help desk technician or a desktop support agent, you need to be able to track and manage the status of tickets from creation to resolution, and also record all details for historical analysis. An Excel sheet is NOT the tool for this job when you have hundreds of service requests pouring in each day and you have to sit and update their status on a spreadsheet. Whether it’s a single technician’s tickets, tickets for a support team, or those of the entire IT department, help desk software allows you to capture, track, and monitor ticket status with status management dashboards and notifications.

 

Help desk software helps you automate and streamline your ticketing management process by providing the flexibility to create your own workflows and dynamic ticket assignment rules. Service request management should not be complex when help desk software offers ample functionality to simplify it. When you have simplified ticketing management, you gain more time for actual IT support and issue resolution.

Shellshock is the name given to a vulnerability detected in Bash that allows attackers to remotely compromise vulnerable systems, allowing for unauthorized disclosure of information. Ever since news of the bug came out—and the original fix turned out not to fix the issue—attackers have been using ‘masscan’ to find vulnerable systems on the Internet. This means network-based attacks against *nix-based servers and devices, through web requests or other programs that use Bash, are happening. Check Robert Graham’s blog here to learn more.

 

Your first step should be to test if your version of bash is vulnerable by typing the following in your command line:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"


If the system is vulnerable, the output would be:

 

vulnerable

this is a test

 

That means you need to patch your server’s Bash as soon as possible. If your network devices are vulnerable, contact your vendor. For Cisco’s list, check the link here, and for SolarWinds’ list, check this blog.

 

My first thought was, because the access vector for shellshock is the network, would the network show signs of an attack leveraging the bash bug?

 

Here is some info from redhat.com blog:

“The vulnerability arises from the fact that you can create environment variables with specially-crafted values before calling the Bash shell. These variables can contain code, which gets executed as soon as the shell is invoked.”

 

In short, the Bash shell allows function definitions to be passed using environment variables that share the name of the function, and the string "() { :;};" marks a function declaration. So, the initial attack vector will always include (or start with) the “() {“ sequence, and that should be the signature for detecting Bash attacks.

 

My next thought was, if you don’t have an IDS or IPS on which you can define the signature, can your other network devices detect the “() {“ signature in the HTTP header and help you mitigate an attack?

 

Let us talk ‘Cisco’ here. Cisco devices have a couple of options for HTTP header inspection. One is NBAR, but NBAR’s HTTP header inspection is limited to three fields as far as client-to-server requests are concerned, namely ‘user-agent’, ‘referrer’, and ‘from’, none of which will hold the signature “() {“.

 

The second option I found for HTTP header inspection is the Zone-Based Policy Firewall (ZFW), which Cisco states is available on Cisco routers and switches from IOS 12.4(6)T onwards. ZFW supports application-layer (Layer 7) inspection, including HTTP headers, which can then be used to block traffic. ZFW lets you use the Class-Based Policy Language (remember QoS?) to define what traffic has to be matched and what action has to be taken.

 

With ZFW, you can inspect and block HTTP traffic that includes the regex “\x28\x29\x20\x7b” in the header. If you are wondering why “\x28\x29\x20\x7b”, that is the hex format for “() {“. Refer to the chart here to see how we converted our signature to a hex regex.
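As a quick sanity check, independent of any Cisco configuration, you can confirm from a shell that those hex bytes really decode to the Shellshock prefix:

# prints the four characters "() {" followed by a newline
printf '\x28\x29\x20\x7b\n'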

 

Back to the Bash bug and ZFW: based on Cisco configuration guides, a sample configuration for Bash attack mitigation should look like the below, but supported commands could change depending on the IOS version.

 

Define a parameter map to capture the signature we were looking for:

parameter-map type regex bash_bug_regex

pattern "\x28\x29\x20\x7b"


Create the class map to identify and match traffic:

class-map type inspect http bashbug_classmap

   match req-resp header regex bash_bug_regex

 

Put the class under a policy to apply a reset action to the traffic that was matched by the class map.

policy-map type inspect http bashbug_policymap

   class type inspect http bashbug_classmap

      reset
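The snippets above define the inspection logic but stop short of applying it. For completeness, a hedged sketch of the remaining step (nesting the HTTP policy under a Layer 4 inspect policy and binding it to a zone pair) might look like the following; the zone and class names are mine, and the exact commands vary by IOS release.

class-map type inspect match-any L4-HTTP-CLASS
 match protocol http
!
policy-map type inspect L4-INSPECT-POLICY
 class type inspect L4-HTTP-CLASS
  inspect
  service-policy http bashbug_policymap
!
zone security OUTSIDE
zone security INSIDE
zone-pair security OUT-TO-IN source OUTSIDE destination INSIDE
 service-policy type inspect L4-INSPECT-POLICY
! interfaces also need "zone-member security INSIDE" or "zone-member security OUTSIDE" under their configuration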

 

While the HTTP header inspection may cause a CPU spike, ZFW would still be a good option if you cannot apply any available patches right now. ZFW is also an extensive topic and can have implications for your network traffic if not implemented properly. Read up on ZFW with configuration examples here:

http://www.cisco.com/c/en/us/support/docs/security/ios-firewall/98628-zone-design-guide.html

http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/sec_data_zbf/configuration/xe-3s/sec-data-zbf-xe-book/sec-zone-pol-fw.html#GUID-AD5C510A-ABA4-4345-9389-7E8C242391CA

 

Are there any alternatives to ZFW for network-level mitigation of Bash bug-based attacks?

I briefly touched on IP SLA in one of my previous posts, and I wanted to spend a bit more time on this topic simply because IP SLA is a very powerful tool for monitoring the health of your VoIP network, and one of my favorites! (Plus, it's not just for your VoIP infrastructure.)

 


A few types of IP SLA operations that are useful in a VoIP deployment:

UDP Jitter - This one goes without saying, as it is probably the most common IP SLA operation deployed in a VoIP network. After all, keeping track of the jitter within your network could be the first sign of a circuit/WAN/LAN issue or possibly a QoS policy that needs to be looked at. (A minimal configuration sketch follows the DNS item below.)

DHCP - This one is not specific to the VoIP infrastructure, but hear me out. The VoIP phones probably won't be rebooting too often, but if the endpoints are not able to receive their proper IP addresses in a timely fashion, you will definitely receive some tickets from the end users. Without an IP address, those IP phones are not really IP phones, are they?

DNS - Like DHCP, this one is not specific to your VoIP infrastructure, but your IP phones may be configured to perform DNS lookups to find their Call Manager or to utilize specific services. Your VoIP infrastructure is more than likely dependent on DNS, so keeping an eye on DNS performance could definitely give you a troubleshooting edge.
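For reference, a minimal UDP jitter operation on a Cisco router looks roughly like the sketch below; the destination address, port, codec, and frequency are placeholders rather than recommendations.

! on the sender
ip sla 10
 udp-jitter 10.1.1.1 16384 codec g711alaw
 frequency 60
ip sla schedule 10 life forever start-time now
!
! on the target router, the responder must be enabled
ip sla responder

Using a codec keyword makes the operation report MOS and ICPIF scores in addition to jitter, latency, and loss.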


Historical IP SLA Information can save you!

Having historical information for IP SLA statistics can be just as useful as the IP SLA tool itself. After all, having basic monitoring to prove network availability is one thing; being able to provide performance statistics at the application level is another useful level entirely.

The historical information can also be used to identify and/or track peak usage times within the network, especially if you see a performance degradation every day at a specific time.


Yes, you can have too much of a good thing!

IP SLA operations can be great for monitoring a network and provide a lot of valuable troubleshooting information. However, you will definitely want to keep in mind the amount of SLA traffic you are generating (especially if you are marking all your UDP jitter operations as EF/DSCP 46); this could cause issues in itself by wasting precious space in your priority queues if you designed your queues pretty lean.


Something else to consider: if you are terminating all your IP SLA operations on a single router, you may want to keep a close eye on the resource utilization of that router. The IP SLA process doesn't utilize a lot of resources, but multiply that by 50 or more and it will definitely be a performance hit, and it could even throw off the IP SLA statistics/results. In worse cases, if the CPU/memory spikes often enough, you could start seeing issues in your data plane forwarding.




Like everything else in life, it is a matter of finding a good middle ground that provides the monitoring you need without any negative effects. So, to what extent are you utilizing IP SLA in your network, and which operation types are you relying on? Are you even a fan of IP SLA? (I know I've definitely got some horror stories, but I also have a lot of good stories.)


To the cloud!

Posted by mwpreston Sep 28, 2014

Private, Public, Hybrid, Infrastructure as a Service, Database as a Service, Software Defined Datacenter; call it what you will, but for the sake of this post I’m going to sum it all up as cloud. When virtualization started to become mainstream, we saw a lot of enterprises adopt a “virtualization first” strategy, meaning new services and applications introduced to the business would first be considered for virtualization unless a solid case for acquiring physical hardware could be made. As the IT world shifts, we are seeing this strategy move toward a “cloud first” strategy. Companies are asking themselves questions such as “Are there security policies stating we must run this inside our datacenter?”, “Will cloud provide a more highly available platform for this service?”, and “Is it cost effective for us to place this service elsewhere?”

 

Honestly, for a lot of services the cloud makes sense! But is your database environment one of them? From my experience, database environments stay relatively static. The database sat on its own physical hardware and watched us implement our “virtualization first” strategies. We’ve long since virtualized the web front ends, the application servers, and all the other pieces of our infrastructure, but have yet to make the jump with the database. Sometimes it’s simply due to performance, but with the advances in hypervisors as of late we can’t necessarily blame it on metrics anymore. And now we are seeing cloud solutions such as DBaaS and IaaS present themselves to us. Most of the time, the database is the heart of the company: the main revenue driver for our business and customers. It gets so locked up in change freezes that we have a hard time touching it. But today, let’s pretend that the opportunity to move “to the cloud” is real.

 

When we look at running our databases in the cloud, we really have two main options: DBaaS (database functionality delivered directly to us) and IaaS (the same database functionality, but with control over a portion of the infrastructure underneath it). No matter which choice we make, to me the whole “database in the cloud” scenario is one big trade-off. We trade away our control and ownership of the complete stack in our datacenters to gain the agility and mobility that cloud can provide.

 

Think about it! Currently, we have the ability to monitor the complete stack that our database lives on. We see all traffic coming into the environment and all traffic going out, and we can monitor every single switch, router, and network device inside our four datacenter walls. We can make BIOS changes to the servers our database resides on. We have complete control over how our database performs (with the exception of closed vendor code). In a cloudy world, we hand over that control to our cloud provider. Sure, we can usually still monitor performance metrics based on the database operations, but we don’t necessarily know what else is going on in the environment. We don’t know who our “neighbors” are or whether what they are doing is affecting us in any way. We don’t know what changes or tweaks might be going on below the stack that hosts our database. On the flip side, though, do we care? We’ve paid good money for these services and SLAs and put our trust in the cloud provider to take care of this for us. In return, we get agility. We get functionality such as faster deployment times. We aren’t waiting anymore for servers to arrive or storage to be provisioned. In the case of DBaaS, we get embedded best practices. A lot of DBaaS providers do one thing and one thing alone: make databases efficient, fast, resilient, and highly available. Sometimes the burden of DR and recovery is taken care of for us. We don’t need to buy two of everything. Perhaps the biggest advantage, though, is the fact that we only pay for what we use. As heavy resource peaks emerge, we can burst and scale up automatically. When those periods are over, we can retract and scale back down.

 

So, thoughtful remarks for the week: which side of the “agility vs. control” trade-off do you or your business take? Have you already made a move to hosting a database in the cloud? What do you see as the biggest benefit or drawback of using something like DBaaS? How has cloud changed the way you monitor and run your database infrastructure?

 

There are definitely no right or wrong answers this week; I’m really just looking for stories. And these stories may vary depending on your cloud provider of choice. To some providers, this trade-off may not even exist. For those doing private cloud, you may have the best of both worlds.

 

As this is my fourth and final post with the ambassador title, I just wanted to thank everyone for the comments over the past month. This is a great community with tons of engagement, and you can bet that I won’t be going anywhere, ambassador or not!

You likely have two common tasks at, or near, the top of your IP address management to-do list:

  • Quickly find and allocate available IP addresses
  • Reclaim unused IP addresses


These tasks seem simple enough. But IP address management is anything but simple. With IP documentation that runs into rows of data and no information about who made updates or when, you could find yourself dishing out incorrect IP address assignments. If that isn’t enough, you might also contend with IP address conflicts, loss of connectivity, loss of user productivity, network downtime, and more. IP address management and troubleshooting can be time consuming and require a lot of manual effort.


With all the tasks and teams involved in network administration, an organized and efficient IP address management process is vital to preventing network downtime. When you oversee hundreds or thousands of IP addresses, you need to stay on top of your IP address management efforts. Identifying and effectively mapping IP address space is a critical piece of the IP address management puzzle.


Accurate mapping of IP address space is important to clearly see your IP address usage stats. IP address space is mainly divided into three units:

  • IP address blocks - a large chunk of IP addresses that are used to organize an IP address space
  • IP address ranges - small chunks of IP addresses that correspond to a DHCP scope
  • Individual IP addresses - map to a single IP address range


When you map IP address space, you might consider using one IP address block for private IP addresses and another block for public IP addresses. Similarly, you can create smaller IP address blocks based on location, department, vendor, devices, etc. However, you do not deploy and manage these IP address blocks on the network like you would IP address ranges or individual IP addresses.


Rules for IP Address Ranges and blocks:

  • IP address ranges are mapped to IP address blocks
  • Multiple IP address ranges can be mapped to a single IP address block, but not to multiple IP address blocks
  • IP address ranges mapped to the same block cannot overlap
  • When an IP address is mapped to a range, actions like adding, updating, or deleting on IP address fields in a range will affect all the IP addresses in that range

 

Once you define your IP address blocks and ranges, be sure to clearly document and inventory them.

 

When you manage a network with hundreds or thousands of IP addresses spread over different locations with multiple departments and projects, IP requirements commonly change. Under these circumstances, manual IP address management is difficult and inefficient because IP addresses are prone to duplication and assignment issues, making troubleshooting even more difficult.

 

The alternative is to use an IP address management tool that automates the entire process of IP address discovery and management. These tools simplify the process of quickly finding an available IP address or reclaiming an unused IP.

 

Top 3 benefits of using an IP address management tool to manage your IP space:

  • Better control and management of both IPv4 and IPv6 addresses: Easily organize IP ranges and individual addresses into logical groups.
  • Reduce manual errors and downtime due to IP address issues: Maintain a centralized inventory of your IP addresses and track changes.
  • Proactive management and planning: Close monitoring of IP space usage and easy identification of both over-used and under-used IP address space.


 

Last month, we shined our IT blogger spotlight on Michael Stump, who was one of the delegates at the recent Tech Field Day Extra at VMworld. This month, I figured why not keep it up? So, I tracked down the renowned Mr. Ethan Banks (@ecbanks), who also participated in the event. Without further ado, here’s what he had to say.

 

SW: First of all, it seems like I see your name just about everywhere. When it comes to blogging, where do you call home, so to speak?

 

EB: I blog in several places these days. My personal hub site is Ethan Banks on Networking, which is devoted to networking and closely related topics. I also write many of the blog posts that accompany the podcast I co-host over at Packet Pushers, a media company that discusses the networking industry with engineers and architects from around the globe. On top of that, I blog roughly once a month or so for Network Computing. Somewhat less frequently, an article of mine will be published by Network World or one of the TechTarget sites. If you poke around, you can find a few other places my writing has appeared as well, including here on thwack.

 

SW: Wow! It sounds like you’re staying busy. And someplace in there I’m sure you find time for a day job, and not to mention a hobby or two, I hope.

 

EB: I have two day jobs, actually. I’m lucky enough that one of them is my blogging. I’m known to many in the networking industry as a writer, podcaster, and co-founder of Packet Pushers. In addition, I’m also the senior network architect for Carenection. Carenection is a technology company that connects medical facilities to medical services such as real-time, video-over-IP language translation via our ever-expanding network covering the US.

 

As far as hobbies go, I enjoy backpacking in the wilderness very much. I do my best to get out on the trails three or four times a month and stomp out some scenic miles in the mountains. I’m lucky enough to live in New Hampshire where there is a great outdoor culture and rich heritage of trails—over 1,400 miles of them in the White Mountain National Forest. My goal is to hike all of those miles. I’ve bagged over 22 percent so far!


SW: It’s fantastic that you’re able to count your writing and podcast efforts as a day job. How did that all get started?

 

EB: I started blogging in January 2007 when I committed to Cisco’s notoriously difficult CCIE program. Blogging was part of my study process. I’d read or lab, then blog about the important information I was learning. Blogging forced me to take the information in, understand it and then write it down in a way that someone else could understand it.

 

SW: And I guess it just grew from there. What are some of your most popular topics?

 

EB: The most popular post I’ve written this year was about my home virtualization lab. The post described in detail my choice of server and network gear, and offered pictures and links so that folks could jump off of my experience to explore their own server builds. Reddit found the article early on and has continued to drive an incredible amount of interest months later.

 

Other popular articles are related to careers. People like to know what the future might hold for them in the networking space, which has been changing rapidly in recent years.

 

Yet other popular articles are “how to” explanations of common technical tasks. For example, I've spent some time with Juniper network devices running Junos, which are very different to configure than Cisco devices running IOS or NX-OS. These articles do well simply because of SEO—people with the same pain point I had find my article via Google, and can use it to help them with their configuration tasks.

 

SW: In between it all, are there any other bloggers you find the time to follow?

 

EB: There are far too many to name, to be fair. I subscribe to several dozen blogs, and usually spend the first 60-90 minutes of my day reading new content. A few that are worth Googling are Etherealmind (broad, insightful networking perspectives by my friend and podcast co-host Greg Ferro), the CloudFlare blog (these guys are doing some amazing things and describe how they push the envelope), Keeping It Classless (my friend Matt Oswalt is on the cutting edge of networking and writes outstanding content), Network Heresy (a blog by some folks working in the networking industry and thinking outside the box), and The Borg Queen (networking perspectives from Lisa Caywood, one of the most interesting people in IT I know).

 

SW: So, we talked about how you got started with blogging, but how did a life in IT begin for you?

 

EB: In a sense, I got into IT out of desperation. I have a CS degree that was focused on programming, but my early jobs out of college were not doing development work. Instead, I spent a year as a school teacher and a year in banking. After a cross-country move to be closer to family, I couldn't find a job in banking in the area I'd moved to. At that time, the banking industry was consolidating, and getting work was very hard. So, I took out a loan and enrolled in a school that taught official Novell Netware training. I quickly became a Certified Netware 3 Administrator, landed a contract supporting a company in my area, and never looked back.

 

SW: Being an IT management company, I of course always like to ask guys like you who’ve been in IT for a good while about what tools they can’t live without. What are some of yours?

 

EB: Any tool that can track historical routing table topology information is a favorite of mine. I’m sometimes called on to find out what changed in the middle of the night that caused that 10 second blip. That’s impossible to do without the right tool. Packet Design’s Route Explorer, a product I admittedly haven’t used in a few years as I’ve changed jobs, is such a tool that knows exactly the state of the network, and could rewind to any historical point in time. Fabulous tool.

 

Over the years, I’ve also used SolarWinds NPM, NTA, NCM, VNQM, Kiwi CatTools, and the Engineer’s Toolset. I’ve also spent time with SAM and UDT. My favorites have to be the tools that let me get at any sort of SNMP OID I want. So, NPM is the SolarWinds tool I’ve spent the most time with and gotten the most from, including NPM’s Universal Device Poller feature and Network Atlas. Along the same lines, the Engineer’s Toolset is a great resource. I’ve saved myself lots of time with the Switchport Mapper and also caught bandwidth events in action using the real-time gauges. These are simple tools, but reliably useful and practical.

 

SW: To finish us off, tell me a few of the things you’re seeing happen in the industry that will impact the future of IT.

 

EB: There are three trends that I think are key for IT professionals to watch over the next several years.

 

First, hyperconvergence. Entrants like VMware’s EVO:RAIL are joining the fray with the likes of upstarts Nutanix and Simplivity, and with good reason. The promise of an easy-to-deploy, fully integrated IT platform is resonating well with enterprises. Hyperconvergence makes a lot of sense, obscuring many of the details of complex IT infrastructure, making it easier to deliver applications to an organization.

 

Second, automation. Configuring IT systems by hand has been on the way out for a long time now, with networking finally heading into the automated realm. Automation is pushing IT pros to learn scripting, scheduling, APIs, and orchestration. The trick here is that automation is bringing together IT silos so that all engineers from all IT disciplines work together to build unified systems. This is not the way most IT has been building systems, but it appears to be the standard way all IT systems will be built in the coming years.

 

Finally, the move from public to private cloud. There’s been lots of noise about organizations moving their internal IT resources out to the public cloud, right? But another trend that's starting to show some legs is the move back. Issues of cost and security in the public cloud are causing organizations to take a second look at building their own private clouds instead of outsourcing entire IT applications. This bodes well for IT folks employed by enterprises, but also means that they need to skill up. Building a private cloud is a different sort of infrastructure than the traditional rack and stack enterprise.

No matter what bandwidth monitoring solution you use to enhance network availability, you will find the network is often riddled with issues pertaining to latency, packet loss, and jitter. To keep unwanted traffic from interrupting key processes in your network, look for a way to safeguard your business-critical applications by prioritizing them. Implementing and monitoring Quality of Service (QoS) policies can ensure optimal application and network performance, and help network admins weed out unwanted traffic consuming the bulk of your bandwidth.

Why Should You Implement and Monitor QoS?

Quality of Service (QoS) is a set of policies that help network administrators prioritize network traffic for applications based on business impact, and guarantees enough bandwidth to ensure high network performance. It’s a mechanism that’s internal to network devices and determines which traffic gets preferential access to network resources.


Additionally, QoS is fundamental to handling traffic efficiently. A network without QoS policies runs with best-effort delivery, meaning all traffic is routed with equal priority. In times of low network traffic there typically won’t be any problems, but what happens when traffic is heavy and congested?


Without a QoS policy, all network traffic packets have an equal chance of being dropped. A QoS policy will prioritize specific traffic according to its relative importance and application type, and use congestion avoidance to ensure its delivery. For instance, under congestion a network device might choose to queue the traffic of applications that are more latency-tolerant, in turn allowing the traffic of less latency-tolerant applications, such as streaming media/video, IPTV, and VoIP, to be forwarded immediately to the next network device.
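As a rough illustration of that priority treatment on a Cisco router, a minimal MQC policy might give latency-sensitive traffic a strict-priority queue and leave everything else to the default queue with congestion avoidance. The class names, the DSCP match, and the percentages below are placeholders, not recommendations.

class-map match-any LATENCY-SENSITIVE
 match dscp ef
!
policy-map WAN-EDGE-OUT
 class LATENCY-SENSITIVE
  priority percent 20
 class class-default
  fair-queue
  random-detect
!
interface Serial0/0
 service-policy output WAN-EDGE-OUT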


How does QoS Help You in Troubleshooting?


Most network administrators implement QoS policies to make sure their business-critical applications always receive the highest priority.  Additionally, QoS monitoring helps to enhance your network monitoring ability, allowing you to adjust your policies based on your priorities and can also aid in troubleshooting.

Advanced network monitoring software allows you to view network traffic segmented by class of service by monitoring NetFlow data. By using Class-Based Quality of Service (CBQoS) in network monitoring software, you can measure the effectiveness of your QoS policies and quantify bandwidth consumption by class map.



 

Avoid a network riddled with latency, packet loss, and jitter issues. Implement QoS to ensure sufficient network bandwidth is available for business critical IT applications. More importantly, keep a lid on unwanted traffic that could possibly consume your network.

 

Learn More

Continuous Monitoring: Managing the Unpredictable Human Element of Cybersecurity

Deep Packet Inspection for Quality of Experience Monitoring

Let’s face it, there is always a possibility of networks being affected by worms and viruses. If it happens, they can replicate at an alarming rate and slow your network considerably. While you may be trying to resolve the issue as quickly as possible, the fact is your organization is experiencing downtime. The bottom-line impact of downtime can be devastating, especially in terms of monetary loss. And it is time-consuming and tedious to troubleshoot without proper stats on the network or router/switch ports.


Say you just received an alert that one of your access switches is down. Naturally, your performance monitoring tool will show that node in red, but you notice that some of the pings are still getting back. So, you run a web-based traceroute, which shows that some of the nodes in the path are reporting higher response times than usual, whereas the target device replies only sporadically.


Having historical visibility into erring ports and port utilization/errors is good, but it’s immensely helpful to visualize these in real time. When you visualize the interfaces in a chart, you can easily see the high utilization of the uplink interface and the associated access switchport that is generating the high traffic. Now, all you have to do is SSH to the switch and shut down the port, so that the traceroute response times and bandwidth utilization values are restored to their normal values.


To find out if a router or switch is routing significant amounts of traffic, you need real-time troubleshooting software that can isolate exactly which port is generating the traffic. For example, using the Engineer’s Toolset’s Interface Monitor, you can get real-time statistics by capturing and analyzing SNMP data from multiple routers and switches simultaneously. You can watch live monitoring statistics for received and transmitted traffic (Rx, Tx, or Rx + Tx) from a set of statistic groups like percent utilization, bandwidth, total bytes transferred, error packets, and discarded packets. You can also set warning and critical thresholds for specific interfaces.


To do this, select the interfaces you want to monitor and configure the polling interval, metrics, and thresholds in Interface Monitor based on your requirements. You can set the polling interval to collect statistics as frequently as every 5 seconds.
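If you ever need a rough approximation of this real-time view without a dedicated tool, the sketch below uses net-snmp's snmpget to sample the 64-bit inbound octet counter twice and derive bits per second; the host, community string, interface index, and interval are placeholders.

#!/bin/bash
# Sample ifHCInOctets (1.3.6.1.2.1.31.1.1.1.6) twice and estimate inbound bps.
HOST=192.0.2.1        # placeholder device address
COMMUNITY=public      # placeholder SNMPv2c community string
IFINDEX=1             # placeholder interface index
INTERVAL=5            # seconds between samples

OID="1.3.6.1.2.1.31.1.1.1.6.$IFINDEX"
C1=$(snmpget -v2c -c "$COMMUNITY" -Oqv "$HOST" "$OID")
sleep "$INTERVAL"
C2=$(snmpget -v2c -c "$COMMUNITY" -Oqv "$HOST" "$OID")

# Counter delta in octets, converted to bits per second.
echo "inbound bps: $(( (C2 - C1) * 8 / INTERVAL ))"

A proper tool does this per interface and per metric, with thresholds and history; the point of the sketch is only to show what the polling itself involves.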



Cyber-attacks can hit your network hard and keep you awake at night. So, why not avoid the onslaught of viruses and worms with properly maintained IT infrastructure and effective monitoring? Reacting to threats quickly and keeping the network ticking with optimal performance is what keeps network admins on their toes. The ability to respond quickly to bandwidth and network performance issues using the right tools can save time and money, and increase the overall productivity of the users on the network.



Let's face it, you cannot talk about VoIP without hearing about QoS (Quality of Service); for many companies, a VoIP deployment is the only reason they implement QoS. Thinking about it for a while, I realize 90% of the companies I've deployed QoS for were doing it in preparation for, or to improve, a voice deployment. The first question I used to get is, 'Why do I need to deploy QoS? I have 1Gb links; that's more than enough bandwidth.' Well, let's go back to basics. In my mind, voice is a pretty stable and timid IP stream; it’s the rest of the non-VoIP IP traffic that is bursty and rude. So, from my perspective, it's not always a case of managing low-bandwidth links for VoIP traffic; it's a matter of protecting the VoIP RTP streams from all the other day-to-day data traffic. Plus, we also have to consider that not every company can afford 1Gb+ private WAN links at every site, so in that case it does become a matter of reserving bandwidth for VoIP traffic.

 

QoS is definitely one of my favorite topics to discuss and design for, especially because it's one of those topics that every company does differently, and they usually have different goals for the QoS implementation. I'll kick it off with a few points I like to mention out of the gate.

 

Don't queue TCP & UDP traffic together! This is definitely one of my favorites. I've seen many people lump a bunch of applications together and throw them into a single queue. It sounds like a good idea, but remember how TCP and UDP fundamentally behave when packet loss occurs. If the congestion avoidance mechanisms (RED/WRED) kick in and a UDP packet is dropped, the flow continues like nothing happened. Whereas if a TCP packet is dropped, the stream decreases its window size and less data gets transferred over time until the endpoints negotiate the window size back up to where it was. You might find yourself in a situation where TCP throughput is suffering but the UDP applications function like normal, because they have essentially taken over the whole queue. This is a rather tough situation to troubleshoot.
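To make that concrete, a rough sketch in Cisco MQC terms (the protocols, class names, and percentages are mine, purely for illustration) keeps bulk TCP applications and UDP streams in separate bandwidth queues and applies WRED only to the TCP queue, where dropping actually signals the senders to back off:

class-map match-any BULK-TCP
 match protocol ftp
 match protocol smtp
class-map match-any UDP-STREAMS
 match protocol rtp
!
policy-map SEPARATE-QUEUES
 class BULK-TCP
  bandwidth percent 30
  random-detect
 class UDP-STREAMS
  bandwidth percent 20
 class class-default
  fair-queue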

 

Get Sign-off from Management - This may sound odd or trivial at first, but it is usually best to work with the business (was that layer 8 or 9 again? I always confuse those two) to determine what traffic allows the company to bring in the money. You also might want to take that a step further and ask that same management/business team to put a priority on those business applications, so they can decide which applications can/should be dropped first if bandwidth is not sufficient. After all, the last thing you want to do is explain to your own management or business teams why you are dropping business-critical traffic. It is a good idea to make sure they are standing behind your QoS configuration.

 

Trust boundaries - Deciding where you place your trust boundary can change your configuration and design drastically. After all, if you decide to place your trust boundary on a site's edge/WAN router, then you only need to worry about the queuing outbound on the WAN and the inbound markings. However, if you set up your trust boundary on your access switches, then you may also need to consider Layer 2 QoS mechanisms and the queuing from the Layer 2 device to the upstream Layer 3 WAN router.
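For what it is worth, extending the trust boundary to the phone on many Catalyst access switches looks roughly like the following sketch; the interface names are placeholders, and the exact commands (mls qos versus newer MQC-style syntax) depend on the platform and IOS version.

mls qos
!
interface GigabitEthernet0/1
 description access port with IP phone
 mls qos trust device cisco-phone
 mls qos trust cos
!
interface GigabitEthernet0/48
 description uplink toward the Layer 3 WAN router
 mls qos trust dscp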



Those are a few considerations I take into account when working on QoS. What else do you consider when deploying QoS in your environment?

Problem management is a crucial part of IT service management that requires support teams to diagnose the root cause of incidents (identified as problems) and determine the resolution to these problems. This is not an easy task, specifically for mid-size to large organizations, where the number of incidents logged is considerably high and the process becomes harder to handle. Typically, problem management has been reactive in nature, i.e., getting into inspection mode after an incident has occurred. While incident management helps restore the service temporarily, problem management comes afterwards and ensures there is a permanent fix, making sure the incident will not recur.

 

It is also important to look at problem management from a proactive standpoint. Here, IT pros analyze past incidents, extrapolate trends, and investigate whether any specific conditions in the IT framework will cause problems to occur. Proactive problem management overlaps with risk management, as we have to constantly study the IT infrastructure, identify risks, and mitigate them before they turn into problems and affect service delivery.

 

The help desk plays a vital role in both types of problem management.

  • In reactive problem management, a help desk ensures incidents are recorded properly and easily tied to problems, while also supporting customizable workflows to handle incident and problem tickets. Help desk integration with remote control tools speeds up reactive problem management by allowing admins to quickly and remotely solve the end-user desktop issues behind the problems.
  • In proactive problem management, a help desk provides data about the various entities of the service management model (operations, infrastructure, people, process, and service requests) and helps you get better at understanding and identifying risks. If your help desk can integrate with IT infrastructure management tools such as network and server monitoring to associate network, application, and server issues with incident tickets, it will help you identify infrastructure-related trends behind problem tickets.

 

It is important for IT departments to decide on and plan in advance a feasible problem management methodology that can be applied easily to known problems and is also flexible enough to adjust and apply to new problems. Instead of siding with either the reactive or the proactive approach, IT should strategize for both and be prepared to fix problems fast.

 

Share with us how you handle problems as part of your IT service support process.
