
Geek Speak


This month’s installment of our IT blogger spotlight series shines on Scott McDermott, who runs Mostly Networks. Scott can also be followed on Twitter, where he goes by @scottm32768.


Check out the Q&A to not only get to know Scott a little better, but also hear his thoughts on everything from SolarWinds Network Performance Monitor (NPM) to the impact of major trends such as IT convergence and SDN.


SW: So Scott, tell us about yourself.


SM: Well, I’m a network engineer in the public sector. I manage the networks for a small data center and 50 sites. We have public Wi-Fi at all of our locations and that has been a focus of my organization’s services for the last few years. It’s been a good excuse to dive into Wi-Fi more deeply, which I think is my favorite technology to study and work with right now. That said, I started my career as a system administrator and did that for a long time; sometimes I’m also still called on to wear my SysAdmin hat.


SW: How’d you get involved with IT in the first place?


SM: My mother has been training people to use computers for most of her career, so we always had computers in the house. The first computer we had was a TRS-80 Model 1 with a tape drive. It even had the 16KB RAM upgrade! My father is also very technical and has worked with RF and two-way radio communications systems for most of his career. I like that with my Wi-Fi work, I’m sort of combining knowledge I picked up from both parents. All my friends through school were geeks, so obviously we were always playing with computers. In college, it was natural to get a job in the computer lab. I guess it was really just a natural progression for me to end up in IT.


SW: So as a seasoned IT pro with a passion for tech literally flowing through your veins, what are some of the tools you can’t live without?


SM: I have three favorites that pop into mind right away. First is my MacBook, because I really think having a Mac as my primary platform makes me more efficient for the kind of work I’m doing. My favorite hardware is the Fluke OneTouch AT because it can do in-line packet capture with PoE. I’ve found that to be really useful for troubleshooting. It also has some nice features for testing Wi-Fi and wired connections. My current favorite bit of software is Ekahau Site Survey. I’ve been doing a lot of predictive site surveys and it’s really a pleasure to use.


Speaking of things that are a pleasure to use, I like the ease of use of SolarWinds NPM and we use it as our primary monitoring tool. We’ve tried a number of other specialized products for monitoring various components of our IT infrastructure, but we almost always end up adding another SolarWinds product to the underlying backbone platform. SolarWinds just does what we really need without the management overhead.


SW: That’s fantastic! We’re thrilled to have you as a fan. Diverging from IT for a moment, what about when you’re not in front of a computer…what are some of your other interests?


SM: My wife and I are big fans of NASCAR, so following the races is one of our favorite things. We also enjoy geocaching, which often results in camping and/or hiking. The kids are sometimes less into the hiking bit, but we’ve found going for a geocache turns it into an adventure. It’s a good excuse to get outside and away from the computers!


SW: I guess that brings us to Mostly Networks. How did it come about?


SM: I had thought about blogging for a while, but didn’t think I had anything to add. I finally started Mostly Networks after becoming involved in the network engineering community on Twitter. Many of the others there were blogging and encouraging others to do so as well. It seemed like a good way to give back to the community that I had found helpful. With that in mind, I most enjoy writing about the things I’ve been working on at the office, and the most rewarding posts are those where I solved a problem for myself and it ended up being useful to others.


SW: Outside of Mostly Networks, what other blogs or sites do you keep up with?

SM: Since I’ve been doing a lot of wireless work, WirelessLAN Professionals and No Strings Attached Show are a couple I follow closely. Packet Pushers is a site every network engineer should be following. I also enjoy Tom Hollingsworth’s posts at Networking Nerd.


SW: OK, time to put on your philosopher’s hat for our last question. What are the most significant trends you’re seeing right now and what are the implications for IT as a whole?


SM: The breaking down of silos due to the convergence of, well, everything is huge. The system, network and storage teams really need to start communicating. If yours haven’t already, your organization is behind. IT workers who are in their own silos need to start branching out to have at least some familiarity with the other infrastructure components. The days of being a pure specialist are going away and we will all be expected to be generalists with specialties.


Specifically in the networking space, SDN is picking up steam and looks to be the driver that will get networking to the same level of automation that the system teams already have. Networking has always been slow to automate, for a variety of both good and bad reasons, but automation is coming and we will all be better off for it!

We know there are many organizations out there that do no asset inventory at all, beyond slapping an organizational serial number tag on the notebook and noting where it went.


But think about the next time you have to replace legacy systems. For example, say you need to replace highly inefficient power supplies that are drawing hundreds of watts of power. With new systems, you could achieve 10x the capacity but draw only a quarter of the power load. This could potentially be a self-funding project in the electrical and cooling savings alone! But not so fast, says your budget approver. Without a proper inventory of those legacy systems, alongside your hardware warranty data, it’s nearly impossible to make a tangible case for the replacement.
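To make that case concrete, here is a back-of-the-envelope sketch. Every number in it is made up for illustration: old gear drawing 4,000 W, replacements drawing 1,000 W, and power at $0.12/kWh.

```shell
# Hypothetical numbers only: legacy gear draws 4000 W, replacement gear 1000 W,
# power costs 12 cents/kWh, and the systems run 24x7 (8760 hours/year).
# Integer math is close enough for a budget conversation.
old_w=4000; new_w=1000; rate_cents_kwh=12; hours_per_year=8760

saved_kwh=$(( (old_w - new_w) * hours_per_year / 1000 ))
echo "Saved kWh/year: $saved_kwh"                                  # 26280
echo "Saved dollars/year: \$$(( saved_kwh * rate_cents_kwh / 100 ))"  # $3153
```

And that is electricity alone, before cooling. The catch, as above: without an inventory that tells you how many legacy systems exist and what they draw, even this rough math is out of reach.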


OK, so hopefully your systems have energy-efficient power supplies but you get the point: server and IT asset management is a must. It provides you the means to achieve complete visibility into your infrastructure inventory, helping you gain an in-depth understanding of:

  • Where servers and other hardware exist
  • Where components reside
  • How they are used
  • What they cost
  • When they were added to the inventory
  • When warranties and upgrades expire
  • How they impact IT and business services


Having this level of visibility into server performance helps SysAdmins improve infrastructure efficiency and performance, do year-end budgeting, show how existing server hardware assets yielded a strong ROI, and plan and forecast. That conversation about budgeting for the next hardware upgrade just got a lot easier.


But let’s say you’re one of the few organizations that has an extensive Excel worksheet that captures all relevant information as each system is first unboxed. You’re not encountering any pain points so far, but no doubt they’re on the way.


In the beginning, it’s a cheap and straightforward answer to having visibility into your inventory. But suddenly, more people are hired, or you’re implementing cloud-based services, or department ‘X’ wants to upgrade/replace their legacy business application; all of this requires additional equipment or the upgrade/replacement of existing equipment. As your IT organization grows, your spreadsheet exponentially grows into a big, hairy mess of manual tracking, taking more and more effort to maintain and falling further and further down your list of priorities.


So we want to know: where do you fall on the spectrum of asset management? At what point is something like a tool for automated asset management warranted? When does a simple spreadsheet do the trick? And at what point is that spreadsheet doomed to inefficiency?

Metrics and measurements are incredibly important to businesses these days. Every second matters, and every dollar matters.  Metrics have always been a topic of discussion, debate, frustration and conversation in the world of IT and technical support, but now shifts in the worlds of technology and business are having effects on what to measure and how to measure.

For years, help desks and service desks measured themselves based on the historical method of contact: Phone. Metrics such as speed to answer, average handle time, time in queue, agent utilization rate, and abandon rate have been on the top of most support center managers’ lists for years.  Technology made these measures increasingly easy. Automatic call distributors logged the number of calls, the wait time, the number of abandoned calls, and the time agents/analysts spend ready to pick up the phone, and produced reports for managers to help them calculate the operational metrics they needed to properly staff and run support centers. First call resolution (FCR; resolving a ticket on the first call, even if there was a “warm transfer”) became king-of-the-hill. “One and done” has been spoken millions of times by thousands of support managers.  As HDI’s in-house subject matter expert, I’m asked about FCR more than any other single topic.

But there’s trouble here.

First of all, phone is slowly declining as a contact method as other channels have come into play. For the HDI 2014 Support Center Practices & Salary Report (due out in October 2014), we asked about phone, chat, email, web request (tickets submitted directly by the end users), autologging (tickets created without human intervention), social media, walkup, text (SMS), mobile app and fax. (Yes, fax! It’s still supported as a channel in more than 8% of organizations.)

This channel explosion has created puzzles for support center managers. The ACD is no longer providing enough information for staffing, and many managers are scrambling to fit channels like email into the old telephone mold: “What’s the email equivalent of first call resolution?”

From my vantage point, I can see their frustration, but also think that there needs to be some serious adjustment. Metrics that were very useful in the past are no longer the keys to effective, efficient support.

Instead of focusing metrics on ourselves, we need to be looking at our customers and determining what is valuable to them. In another recent report, HDI found that 85% of IT departments—and 87% of support centers—are feeling pressure from their businesses to show value, not just completion of work and efficiency.

Let’s take a look at that formerly paramount metric, first call resolution, and see what it tells us about business value.

  • Most commonly, “one and done” calls relate to issues that are known
    • User calls
    • Analyst checks knowledge base
    • Analyst tells user the solution
    • Incident is resolved
  • The most common FCR call is password reset
    • 30-35% of all calls to the support center are password-related
    • Putting in a self-service password reset tool may drive these calls up because the tool doesn’t work with all passwords users need

So here’s what the support industry has been hanging its hat on: incidents that have known solutions, and password resets that are required because the IT environment is too complex. It’s not really any wonder that many organizations are trying to figure out whether they need a support center at all. (They still do—in most cases—by the way.) So what should support organizations do instead? Here are five suggestions:


  1. Push as much repetitive work out to self-service as possible. Provide the solutions to common issues in a good, easy-to-understand self-service system. And yes, your customer will use it, and yes, they will thank you for getting them off the phone queue.
  2. Move more technical work to the front line. This is commonly called “Shift-Left” and it works. Use humans for problem solving and assistance, not reading answers to end users.
  3. Start measuring things that show value to your business, such as interrupted user minutes (IUM: number of minutes of interruption X number of users affected).
  4. Start using solutions that are as simple as possible. Software that does not integrate with your organization’s other tools—such as password reset—does not fit your basic requirements and should not be considered.
  5. Use good knowledge management practice. Share what you know and keep it up to date.  Everyone benefits.
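The IUM metric in item 3 above is just a multiplication, but a worked example makes it tangible. The numbers here are hypothetical: a 12-minute outage affecting 250 users.

```shell
# IUM = number of minutes of interruption x number of users affected.
# Hypothetical incident: a 12-minute outage hitting 250 users.
minutes=12
users=250
echo "IUM: $(( minutes * users ))"   # prints: IUM: 3000
```

Two incidents with identical ticket counts can have wildly different IUM scores, which is exactly why it speaks to the business better than call-handling stats do.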

You can help that confused support center manager by working together to cut complexity and provide solutions to the customer: Your business.

Back when I was an on-call DBA, I got paged one morning for a database server having high CPU utilization. After I punched the guy who set up that alert, I brought it up in a team meeting: is this something we should even be reporting on, much less alerting on? Queries and other processes use CPU cycles, but as a production DBA you are frequently at the mercy of some third-party application’s “interesting” coding decisions causing more CPU usage than is optimal.


Some things in queries that can really hammer CPUs are:

  • Data type conversions
  • Overuse of functions—or using them in a row by row fashion
  • Fragmented indexes or file systems
  • Out of date database statistics
  • Poor use of parallelism


Most commercial databases are licensed by the core—so we are talking about money here. Also, with virtualization, we have more options for easily changing CPU configurations, but remember that overallocating CPUs on a virtual machine leads to less-than-optimal performance. At the same time, CPUs are a server’s ultimate limiter on throughput—if your CPUs are pegged, you are not going to get any more work done.


The other angle to this is that since you are paying for your databases by the CPU, you want to utilize them. So there is a happy medium of adjusting and tuning.
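On the capture side, a minimal sketch (assuming a Linux host; the log file name is arbitrary) is to snapshot the kernel's CPU counters on a schedule and keep the history:

```shell
# Append a timestamped copy of the aggregate CPU counters from /proc/stat.
# Run from cron every minute and you have CPU-usage-over-time for free;
# deltas between snapshots give utilization per interval.
echo "$(date +%s) $(head -1 /proc/stat)" >> cpu-usage.log

# With the sysstat package installed, sar does the same job with nicer output,
# e.g. twelve 5-second samples of overall utilization:
#   sar -u 5 12
```

This is deliberately crude; the point is that even a cron line gives you the trend data to argue about licensing and sizing with something other than a gut feeling.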

Do you capture CPU usage over time? What have you done to tune queries for CPU use?


Log aggregation

Posted by mellowd Nov 2, 2014

Way back in the past I used to view logs after an event had happened. This was painfully slow, especially when viewing the logs of many systems at the same time.


Recently I've been a big fan of log aggregators. On the backend it's a standard log server, while all the new intelligence is on the front end.


One of the best uses of this in my experience is seeing what events have occurred and which users have made changes just before. Most errors I've seen are human error. Someone has either fat-fingered something or failed to take into account all the variables or effects their change could have. The aggregator can very quickly show you that X routers have OSPF flapping, and that user Y made a change 5 minutes ago.


What kind of intelligent systems are you using on your logs? Do you use external tools, or perhaps home grown tools to run through your logs and pull relevant information and inform you? Or, do you simply use logs as a generic log in the background to only go through when something goes wrong?


The Importance of Standards

Posted by jdanton Oct 27, 2014

Throughout my career I have worked on a number of projects where standards were minimal. You have seen this type of shop—the servers may be named after superheroes or rock bands. When the company has ten servers and two employees it’s not that big of a deal. However, when you scale up and suddenly become even a small enterprise, these things can become hugely important.


A good example is having the same file system layout for your database servers—without this it becomes hugely challenging to automate the RDBMS installation process. Do you really want to spend your time clicking next 10 times every time you need a new server?


One of my current projects has some issues with this—it is a massive virtualization effort, but the inconsistency of the installations, not following industry best practices, and the lack of common standards across the enterprise have led to many challenges in the migration process. Some of these challenges include inconsistent file system names, and even hard coded server names in application code. I did a very similar project at one of my former employers who had outstanding standards and everything went as smoothly as possible. 


What standards do you like to enforce? The big ones for me are file system layout (e.g. having data files and transaction/redo logs on the same volume every time, whether it is D:\ and L:\, or /data and /log) and server naming (having clearly defined names makes server location and identification easier). Some other standards I’ve worked with in the past include how to perform disaster recovery for different tiers of apps or which tier of application is eligible for high availability solutions.


Online logging

Posted by mellowd Oct 27, 2014

There are a number of companies doing log analysis in 'the cloud.' What do people think of the security implications of this?


Your logs that are uploaded are generally inside some sort of private container, however there have been a number of high profile security concerns. This includes holes in regular open-source software as well as lax security by companies providing cloud services.


If you're uploading security logs to a remote system, and that system is compromised, you're essentially giving a blueprint for how to get into your network for those who now have your logs.


What's the best strategy for this? I have a few, each with advantages and disadvantages:

  • Never use one of these services - Keep it all in house, though you lose a ton of the analytics they provide unless you've got developers in-house to build it yourself.
  • Filter what you upload - This gives a broken picture. Partial logs don't mean much, and it will be difficult to figure out what you should be filtering.
  • Put your trust in them - Famous last words? I err on the side of caution and trust no one.
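For the filtering option, a minimal sketch of a pre-upload scrub might pipe the logs through sed before they leave the building. The patterns and file names here are hypothetical; a real filter list has to be built from what your logs actually contain.

```shell
# Redact usernames and internal 10.x.x.x addresses from app.log, writing a
# scrubbed copy that is safe(r) to ship to a hosted log-analysis service.
# The two patterns are examples only, not a complete scrub.
sed -E 's/user=[^ ]+/user=REDACTED/g; s/10\.[0-9]+\.[0-9]+\.[0-9]+/INTERNAL_IP/g' \
    app.log > app.scrubbed.log
```

Of course this runs straight into the "broken picture" problem above: every field you redact is a field the analytics can no longer correlate on.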


I'm eager to see how others weigh these trade-offs.


DPI - A True Love Story

Posted by emilie Oct 21, 2014

Esteemed Geek Speak readers -


Recently, a valued customer shared the following story with us. We thought it was so good, we wanted to share. They asked to remain anonymous but the story remains the same:

Long ago, in a data center far, far away...
It’s the question that every support team dreads: “Is it the network or is it the application?” When faced with this challenge recently, we turned to SolarWinds® and the Deep Packet Inspection technology embedded in Network Performance Monitor v11.  During a SWOT analysis of a series of business-critical applications, it was determined that we needed a way to monitor network traffic focused on applications; a different view than the existing NPM (volume) and NTA (volume by applications between discrete endpoints) could provide.  We needed a way to determine how the application was performing and whether the issue truly was the underlying network or the application itself.

So, where to begin with this mountain of a task?

The challenge of sorting out which components needed attention was further complicated by the lack of tools in the new data center.  Neither a dedicated WireShark® server nor a standard Gigastore appliance was in place to help gather packets for analysis.  Enter SolarWinds DPI. Drawing the network, server, virtualization, monitoring and data center teams together, a plan to build and deploy a DPI NPAS (network packet analysis sensor) server in the remote data center was hashed out. Leveraging a Gigamon appliance that would act as the filter for the 10GB links, we built a Windows® 2008 R2 server on top of a VMware® host to house the NPAS agent.  The virtualization team configured host affinity so the guest OS didn’t get vMotioned away from the host with the physical cable in this proof-of-concept build.  With cabling in place, a promiscuous port configured, host affinity set, and a NPAS agent deployed from an additional polling engine that was local to the data center, we were then prepared to start parsing traffic at the application level.

Time was of the essence.

The alternative solution simply required too much lead time for the timeline we had to work with in this troubleshooting effort.  Ordering a Gigastore appliance or building a dedicated WireShark server would have taken weeks from order to delivery and installation.  By leveraging existing hardware and virtual servers, we were able to execute on this proof of concept in under a week.  In spite of having never deployed DPI in a lab environment, not socializing the idea of doing packet-level application monitoring with any of the other support teams, or defining a clear set of objectives, we were able to rapidly go from proposal to deployed solution with minimal effort. 

And on top of all that...

Along the way we discovered that DPI may also save time and money.  While Gigastore and Gigamon appliances are in no danger of being pushed out of core data centers, we’re now considering a Gigamon with DPI to do the ‘app vs. network’ discernment for smaller data centers where the interesting traffic is less than 1 Gbps and unique application monitors number fewer than 50.  The networking team is eyeing DPI in hopes that it can reduce the number of manual packet analysis requests in their future and improve analysis time by targeting the specific part of an application that’s performing poorly, in turn allowing them to focus on other mission-critical tasks.


Thank you for sharing your story, IT hero and valued customer (you know who you are).


And for you other heroes out there, we love to hear from you too. Don't ever hesitate to contact any one of us SWI employees, or just say it loud and proud anywhere on thwack, particularly on Spread the Word.

In the fifteen-plus years that I have worked in information technology organizations, there has been a lot of change. From Unix to Linux, the evolution of Windows-based operating systems, and the move to virtualized servers that led to the advent of cloud computing, it’s been a time of massive change. One of the major changes I’ve seen take place is the arrival of big data solutions.


There are a couple of reasons why this change happened (in my opinion). At very large scale, relational database licensing is REALLY expensive. Additionally, as much as IT managers like to proclaim that storage is cheap—enterprise-class storage is still very expensive. Most of the companies that initiated the movement to systems such as Hadoop and its ecosystem were startups where capital was low and programming talent was abundant, so having to do additional work to make a system work better was a non-issue.


So like any of the other sea changes that you have seen in the industry, you will have to adjust your skills and career focus to align with these new technologies. If you are working on a Linux platform, your skills will likely transfer a little more easily than from a Windows platform, but systems are still systems. What are you doing to match your skillset to the changing world of data?

When saving logs, I like to store data as verbose as possible. However, when viewing a log I may only be looking at specific parts of it. Another concern is needing to hand my logs to a third party without revealing certain information to them. I'll go over a couple of things that I use on a day-to-day basis. Note that entire books have been written about sed and awk, so my use of them is very limited compared to what could be done.



The way I use sed is very similar to vi's search and replace tool. A good example of this is that my blog (http://www.mellowd.co.uk/ccie/) sits behind a reverse proxy. I have an iptables rule that logs any blocked traffic. Now I'd like to share my deny logs, but I don't want you to see my actual server IP; I only want you to see my reverse proxy IP. If I just showed you my raw logs, you'd see my actual IP address. By piping this log through sed, I can change it on the fly. The format to do so is sed s/<source pattern>/<destination pattern>/
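As a self-contained illustration of that format (the address below is a documentation example, not my real one), the substitution looks like this:

```shell
# Replace a server IP with a placeholder on the fly. 203.0.113.5 stands in
# for the real address; dots are escaped because '.' is a regex wildcard.
echo "IPTables Packet Dropped: SRC=203.0.113.5 DPT=1900" \
  | sed 's/203\.0\.113\.5/PROXY_IP/'
# -> IPTables Packet Dropped: SRC=PROXY_IP DPT=1900
```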

I've used sed to change my IP on the fly and can now happily show the logs. Note this is done in real-time, so piping tail through sed is possible:

Oct 19 14:43:11 mellowd kernel: [10084876.715244] IPTables Packet Dropped: IN=venet0 OUT= MAC= SRC= DST= LEN=118 TOS=0x00 PREC=0x00 TTL=53 ID=0 DF PROTO=UDP SPT=34299 DPT=1900 LEN=98

Oct 19 14:49:33 mellowd kernel: [10085258.251596] IPTables Packet Dropped: IN=venet0 OUT= MAC= SRC= DST= LEN=434 TOS=0x00 PREC=0x00 TTL=52 ID=0 DF PROTO=UDP SPT=5063 DPT=5060 LEN=414

Oct 19 14:52:53 mellowd kernel: [10085458.580901] IPTables Packet Dropped: IN=venet0 OUT= MAC= SRC= DST= LEN=40 TOS=0x00 PREC=0x00 TTL=104 ID=62597 PROTO=TCP SPT=6000 DPT=3128 WINDOW=16384 RES=0x00 SYN URGP=0



awk is very handy for showing only the information you require. In the above example there is a lot of information that I might not want to know about. Maybe I'm only interested in the date, time, and source IP address; I don't care about the rest. I can pipe the same tail command I used above through awk and get it to show me only the fields I care about. By default, awk uses the space as the field separation character, and each field is numbered sequentially. The format for this is awk '{ print <fields you want to see> }'
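Here is the field numbering on a self-contained, made-up log line before applying it to the real syslog:

```shell
# awk splits on whitespace by default, so on this line:
#   $1=Oct  $2=19  $3=14:43:11  $4=SRC=203.0.113.5  $5=DPT=1900
# Print the date, time, and source, with a tab before the last field.
echo "Oct 19 14:43:11 SRC=203.0.113.5 DPT=1900" \
  | awk '{ print $1" "$2" "$3"\t"$4 }'
```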


I'll now simply cat the syslog file, and use awk to show me what I want to see:

sudo cat /var/log/syslog |  grep IPTables | awk '{ print $1" "$2" "$3"\t"$13 }'

Oct 19 14:27:47    SRC=

Oct 19 14:29:04    SRC=

Oct 19 14:37:06    SRC=

Oct 19 14:40:32    SRC=

Oct 19 14:40:32    SRC=

Oct 19 14:41:36    SRC=

Oct 19 14:43:11    SRC=

Oct 19 14:49:33    SRC=

Oct 19 14:52:53    SRC=

Oct 19 14:54:50    SRC=

Oct 19 14:58:04    SRC=

Oct 19 15:01:49    SRC=

Oct 19 15:05:38    SRC=

Oct 19 15:06:48    SRC=

Oct 19 15:07:35    SRC=

Oct 19 15:13:42    SRC=

I've included spaces and a tab character between the fields to ensure I get the output looking as I want it. If you count the original log you'll see that field 1 = Oct, field 2 = 19, field 3 = the time, and field 13 = SRC=IP

I may not want to see SRC= in the output, so use sed to replace it with nothing:

sudo cat /var/log/syslog |  grep IPTables | awk '{ print $1" "$2" "$3"\t"$13 }' | sed s/SRC=//

Oct 19 14:27:47

Oct 19 14:29:04

Oct 19 14:37:06

Oct 19 14:40:32

Oct 19 14:40:32

Oct 19 14:41:36

Oct 19 14:43:11

Oct 19 14:49:33

Oct 19 14:52:53

Oct 19 14:54:50

Oct 19 14:58:04

Oct 19 15:01:49

Oct 19 15:05:38

Oct 19 15:06:48

Oct 19 15:07:35

Oct 19 15:13:42


I'm eager to see any other handy sed/awk/grep commands you use in a similar scope to the ones I use above.

The terminology of 'parent-child' applies well in an IT service support environment where support teams classify specific service requests as incident and problem tickets. While both incidents and problems are basically disruptions to IT service, in ITIL these are defined and dealt with differently.


Problem management differs from incident management thus:

  • The principal purpose of problem management is to find and resolve the root cause of a problem and thus prevent further incidents.
  • The purpose of incident management is to return the service to normal level as soon as possible, with smallest possible business impact.


A service desk allows IT teams to tag service requests as incidents and problems. Typically, there comes a situation when multiple incidents need to be linked to a single problem – especially when incidents reveal an underlying problem. For example, when there is a server crash after office hours, it can be treated as an incident and simply fixed to get the server running again. When multiple such incidents happen at various times, including during business hours, there is a greater business impact. Now this can be treated as a problem ticket that needs to be investigated, root-cause analyzed, and solved quickly. When a service desk enables you to link incidents to problems, further analysis and issue resolution become easier. Also, when it comes to closing tickets, one can just close the problem ticket, which will automatically close all of its child incident tickets.


The importance of parent-child relationship is also seen in IT asset management (ITAM) where multiple child assets can be linked to a parent asset. For example, a computer can be the parent, and its hardware peripherals can be the child assets. This is specifically useful for inventory management and easier understanding of asset assignment to clients.


SolarWinds Web Help Desk® supports both these scenarios: parent-child relation for ticketing management and IT asset management. Learn more about Web Help Desk »



My post, Tips for Mapping Your IP Address Space, included some ideas on how to effectively map your IP address space to see the IP address usage in your network. Now, we will look at effectively managing your IPv6 address space. The rapid depletion of IPv4 address space and increased difficulty in acquiring remaining IPv4 resources has prompted network administrators to seriously examine planning, allocation, and management strategies for IPv6 addresses.

For your company to continue achieving business and growth goals, you should start now to prepare your organization’s network to support IPv6—fully or in dual-stack with IPv4.

Benefits of Upgrading to IPv6

Your network infrastructure has a direct impact on your company’s business objectives and growth strategies. The loss of productivity due to network downtime can be costly. Knowing what IPv6 devices are running and how the address space is being used in your network greatly improves your ability to troubleshoot issues and accommodate network growth. The benefits of upgrading to IPv6 include:

  • Reduces network complexity and the need to search for IP information when troubleshooting
  • Heightens your ability to make informed decisions using reliable IP documentation
  • Helps simplify the move from dual-stack support with IPv4 to fully supporting IPv6

IPv6 Management

Some of the major tasks associated with managing IPv6 are:

  • Tracking and allocating IPv6 addresses
  • Identifying which IPv6 addresses are in use
  • Tracking and managing dual-stack hosts

Tracking and Allocating IPv6 Addresses

One of the key differences between IPv4 and IPv6 is that IPv6 has a much larger address space and much larger subnet sizes. The first step to tracking and allocating your IPv6 address space is to see which devices on your network are running IPv6. You can use spreadsheets for this process, but because IPv6 addresses are longer, more complex, and difficult to remember, you increase your risk of making errors when you update the IPv6 data in your spreadsheets. In addition, the dynamic assignment of addresses through DHCP makes it difficult to keep IP documentation up-to-date. It’s much easier to map your IPv4 and IPv6 addresses in a single system for tracking.

Identifying Which IPv6 Addresses are in Use

You can track IP address usage directly from routers. Also, periodically scanning your network to identify and update IP usage documentation helps you maintain a more organized and accurate inventory.

There are tools that enable you to automate tracking and scanning processes and most of them use a variety of scanning techniques like neighbor, ICMP, and SNMP scanning. Each technique has strengths and weaknesses, so to get an accurate, real-time visual of your IP space, it’s better to use a combination of all these techniques.
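As a small sketch of the local building blocks those tools automate (assuming a Linux host with iproute2; not any particular product's method):

```shell
# IPv6 addresses configured on this host.
ip -6 addr show

# IPv6 neighbors recently seen on the local segment: the neighbor discovery
# cache, the IPv6 counterpart of the ARP table. Neighbor scanning works from
# this table, since sweeping a /64 with ICMP pings is impractical.
ip -6 neigh show
```

This only sees one host's view of one segment, which is exactly why the combination of neighbor, ICMP, and SNMP scanning from a central system gives a more accurate picture.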

Your IP management system should be able to provide historical data on IP usage. This information is very useful in tracking IP address consumption over time and performing long-term capacity planning.

Tracking and Managing Dual-Stack Hosts

Track and manage your existing IPv4 space alongside your IPv6 addresses. This helps you with your overall IP management as well as planning for complete migration to IPv6 in the future. To improve overall efficiency of managing IP addresses in your network, choose an IP address management system that provides you with a management layer that spans the entire organization/network. In other words, consolidate your entire IP management along with your DHCP and DNS administration into one system.

Top 3 Tips for Managing IPv6 in Your Network

  • Automate network subnet and device scanning to build accurate maps of your address space and the available IPs within each address block
  • Integrate DHCP and DNS administration for both IPv4 and IPv6 address space to reduce management effort and improve accuracy when provisioning and decommissioning IPs
  • Ensure IPv6 migration preparedness with accurate IP documentation, and verify that critical resources and operations are working as intended to avoid network downtime


Do you often get complaints from customers about the amount of time it takes to reach a support technician? What steps have you tried to overcome this issue? Perhaps:


  • Round-the-clock customer service
  • More technicians
  • Automation of customer service operations using a help desk or ticketing tool


You can improve the turnaround time of customer support calls by providing self-help options. You don’t always need to involve your technicians in every support call or issue that comes through your door. Customer service can also involve providing information to customers so they can resolve some issues themselves.


Your support department likely receives a number of repeated requests for common issues. You can handle these requests like all the others you receive, or you can set up self-help channels that end-users can explore to resolve issues on their own. Self-help channels can be in the form of a knowledge base or even an online forum/community.


  • Knowledge Base (KB): You can compile the common and repetitive issues that end-users experience and document the issues’ descriptions and resolution steps. You configure your help desk software to recognize these issues and direct end-users to your knowledge base as an initial support option. A knowledge base is also a valuable resource for training and educating new help desk technicians.


  • Online Forum/Community: An online forum is where IT pros can share their experiences, queries, issues, and feedback. Technicians can use the forum to interact with customers and invite members to offer their own tips and suggestions for resolving issues.


The knowledge base and online community are useful tools for assisting end-users with common issues that don’t require extensive troubleshooting and technician assistance. Other self-help channels include:


  • Interactive Voice Response (IVR): An automated telephony system that interacts with callers to gather information and route calls to the appropriate recipients. IVRs are designed to accommodate certain types of caller interactions, but they can speed up support calls by routing the caller to the right staff members/technicians faster.


  • Web Portal: A self-managed account where customers can log in and retrieve information from various sources, such as commerce functionality, support ticket updates, new products, and discounts.


  • Proactive Engagement: A method of giving customers the information they need before they contact customer service (bill notifications and details, balance updates, insurance updates and expiration dates, etc.). Organizations can use this method both to communicate important information to customers and to encourage customers to use their products.


Excellent customer service fosters customer loyalty, strengthens your reputation in your industry, and leads to continual business success. Incorporating self-help channels enables you to elevate the level of your customer satisfaction by moving your customers from issue to resolution faster.

In Fundamentals of VoIP Monitoring & Troubleshooting Part 1, we spoke about the difficulty of reactively troubleshooting VoIP-related problems and how Call Detail Records (CDRs) can be used to fill the gap in time. When an end user experiences a problem, it can be minutes, hours, or even days before you're notified and the troubleshooting process begins. Visibility into what took place during that call is paramount, and the metrics gathered from CDRs can help.

Let’s start with the following call quality metrics:

Network Jitter: Real-time voice communications over the network are sensitive to delay in packet arrival time and to packets arriving out of sequence. Excess jitter results in calls breaking up. Jitter can be reduced to a certain extent by using jitter buffers: small buffers that cache packets and deliver them to the receiver in sequence and evenly spaced for proper playback. Buffer lengths can be modified; however, if the jitter buffer is increased too much, the call will experience an unacceptable delay, while reducing the buffer size results in less delay but more packet loss. Jitter is measured in milliseconds (ms).
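The smoothed interarrival jitter estimator that RTP endpoints report (RFC 3550) can be sketched in a few lines; the transit-time samples below are hypothetical:

```python
def interarrival_jitter(transit_times_ms: list[float]) -> float:
    """Smoothed jitter estimate in the style of RFC 3550:
    J = J + (|D| - J) / 16, where D is the change in one-way transit time
    between consecutive packets. transit_times_ms holds per-packet
    (arrival time - send time) values in milliseconds."""
    jitter = 0.0
    for prev, curr in zip(transit_times_ms, transit_times_ms[1:]):
        d = abs(curr - prev)
        jitter += (d - jitter) / 16.0
    return jitter

# Evenly spaced packets produce zero jitter; erratic arrival does not.
print(interarrival_jitter([20.0, 20.0, 20.0, 20.0]))  # 0.0
print(interarrival_jitter([20.0, 35.0, 18.0, 42.0]))
```

The 1/16 gain factor is what makes the estimate "smoothed": a single late packet nudges the value rather than spiking it.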


Latency: Latency, or lag, is the time delay in the transmission of a voice packet. Excess latency results in delay, packet drops, and ultimately echo. Latency is measured in milliseconds (ms).


Packet Loss: Packet loss occurs when one or more packets of data fail to reach their destination. A single lost packet is referred to as a "packet gap", and a series of lost packets is known as a "burst". Packet loss can occur for a variety of reasons, including link failure, high congestion levels, misrouted packets, and buffer overflows. Packet loss causes interrupted playback and degradation in voice quality. Packet loss can be controlled using packet loss concealment techniques within the playback codec.


MOS: Mean Opinion Score is a numerical value indicating the perceived quality of a call from the user's perspective after compression, transmission, and decompression. MOS is calculated from the performance of the IP network, is defined in the ITU-T P.862 (PESQ) standard, and is expressed as a single number from 1 to 5, where 1 is the lowest perceived quality and 5 is the highest.

The metrics above are important to monitor and control in order to keep call quality at an acceptable level. It's also important to note that these metrics can vary depending on where they're captured, so as a best practice, monitor them end to end within your VoIP network. In our next post we'll talk about how you can capture these statistics from the perspective that matters most – between two VoIP phones after a failed call.
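While PESQ itself is a complex algorithm, a common back-of-the-envelope approximation derives an R-factor from latency, jitter, and loss and maps it to MOS using the ITU-T G.107 E-model curve. The impairment weightings below are illustrative rules of thumb, not the standard's full calculation:

```python
def r_to_mos(r: float) -> float:
    """Map an E-model R-factor to a MOS estimate (ITU-T G.107 mapping)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

def estimate_mos(latency_ms: float, jitter_ms: float, loss_pct: float) -> float:
    """Very simplified MOS estimate. Effective latency folds jitter in
    (a common rule of thumb: latency + 2*jitter + 10ms of codec delay);
    the delay and loss impairments are rough approximations."""
    effective_latency = latency_ms + 2.0 * jitter_ms + 10.0
    if effective_latency < 160.0:
        r = 93.2 - effective_latency / 40.0
    else:
        r = 93.2 - (effective_latency - 120.0) / 10.0
    r -= 2.5 * loss_pct  # each percent of loss costs ~2.5 R-factor points
    return round(r_to_mos(r), 2)

print(estimate_mos(20.0, 5.0, 0.0))    # a healthy LAN call, MOS > 4
print(estimate_mos(150.0, 20.0, 5.0))  # a degraded WAN call scores lower
```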


For more information on monitoring and troubleshooting VoIP please read our white paper here.


Log time lengths

Posted by mellowd Oct 13, 2014

How long do you keep your logs? The answer can vary wildly depending on the industry you work in. For example, most VPN providers specifically state that they do not keep logs, so even if a government requested certain logs, they would not have them. The logs they don't keep are likely only user access logs; they'll still have internal system logs.


Ultimately, keeping logs for a long time is of little benefit unless there is a security reason to do so. The recent Shellshock bug is a great example of when older logs can be useful. You may have a honeypot out in the wild, and once a known issue comes to the fore, you can scan your logs to see whether that particular bug was exploited before it was well known.
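As a sketch, scanning archived web server logs for the Shellshock signature, the `() {` bash function-definition string that exploit attempts embed in HTTP headers, takes only a few lines:

```python
import re

# Shellshock exploit attempts embed bash's function-definition trick,
# "() {", in HTTP headers such as User-Agent. Grepping archived access
# logs for it shows whether you were probed before the bug went public.
SHELLSHOCK = re.compile(r"\(\)\s*\{")

def suspicious_lines(lines) -> list[str]:
    """Return the log lines containing the Shellshock signature.
    Accepts an open file object or any iterable of strings."""
    return [line.rstrip("\n") for line in lines if SHELLSHOCK.search(line)]
```

Feeding it a year of archived access logs, e.g. `suspicious_lines(open("access.log", errors="replace"))`, is exactly the kind of retrospective check that justifies longer retention.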


Country and industry regulations will also influence how long logs are kept. Many countries require that documentation and logging data be kept for a certain number of years, for any number of reasons.


I’m interested to know how long you keep your logs, which logs in particular, and why that length of time was chosen.
