Throughout my career I have worked on a number of projects where standards were minimal. You've seen this type of shop—the servers may be named after superheroes or rock bands. When the company has ten servers and two employees, it's not that big of a deal. However, when you scale up and suddenly become even a small enterprise, these things can become hugely important.


A good example is having the same file system layout on all of your database servers—without it, automating the RDBMS installation process becomes hugely challenging. Do you really want to spend your time clicking Next ten times every time you need a new server?


One of my current projects has some issues with this—it is a massive virtualization effort, but inconsistent installations, deviation from industry best practices, and the lack of common standards across the enterprise have led to many challenges in the migration process. These include inconsistent file system names and even server names hard-coded in application code. I did a very similar project at a former employer that had outstanding standards, and everything went as smoothly as possible.


What standards do you like to enforce? The big ones for me are file system layout (e.g., having data files and transaction/redo logs on the same volumes every time, whether that is D:\ and L:\, or /data and /log) and server naming (clearly defined names make server location and identification easier). Some other standards I've worked with in the past include how to perform disaster recovery for different tiers of applications, or which application tiers are eligible for high-availability solutions.
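To make the payoff concrete: a consistent layout means your automation can hard-code its assumptions. A minimal sketch in shell (the /data and /log mount points are just the example paths from above):

```shell
# Pre-install sanity check: with a standard layout, automation can simply
# assert that the expected mount points exist before proceeding.
check_layout() {
  for dir in "$@"; do
    [ -d "$dir" ] || echo "missing standard mount point: $dir"
  done
}

check_layout /data /log
```

On a standardized server this prints nothing; on a snowflake it tells you exactly what to fix before the installer ever runs.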


Online logging

Posted by mellowd Oct 27, 2014

There are a number of companies doing log analysis in 'the cloud'—what do people think of the security implications of this?


Uploaded logs generally sit inside some sort of private container; however, there have been a number of high-profile security concerns, including holes in widely used open-source software as well as lax security at companies providing cloud services.


If you're uploading security logs to a remote system and that system is compromised, you're essentially handing whoever now has your logs a blueprint for getting into your network.


What's the best strategy for this? I have a few, each with advantages and disadvantages:

  • Never use one of these services - Keep it all in house, though you lose a ton of the analytics they provide unless you've got in-house developers to build your own.
  • Filter what you upload - This gives a broken picture. Partial logs don't mean much, and it will be difficult to figure out what you should be filtering.
  • Put your trust in them - Famous last words? I err on the side of caution and trust no one.


Each of these involves trade-offs, and I'm eager to see how others feel.


DPI - A True Love Story

Posted by emilie Oct 21, 2014

Esteemed Geek Speak readers -


Recently, a valued customer shared the following story with us. We thought it was so good, we wanted to share. They asked to remain anonymous but the story remains the same:

Long ago, in a data center far, far away...
It's the question that every support team dreads: "Is it the network or is it the application?" When faced with this challenge recently, we turned to SolarWinds® and the Deep Packet Inspection technology embedded in Network Performance Monitor v11. During a SWOT analysis of a series of business-critical applications, it was determined that we needed a way to monitor network traffic focused on applications—a different view than the existing NPM (volume) and NTA (volume by application between discrete endpoints) could provide. We needed a way to determine how the application was performing and whether the issue truly was the underlying network or the application itself.

So, where to begin with this mountain of a task?

The challenge of sorting out which components needed attention was further complicated by the lack of tools in the new data center. Neither a dedicated Wireshark® server nor a standard Gigastore appliance was in place to help gather packets for analysis. Enter SolarWinds DPI. Drawing the network, server, virtualization, monitoring, and data center teams together, a plan to build and deploy a DPI NPAS (network packet analysis sensor) server in the remote data center was hashed out. Leveraging a Gigamon appliance to act as the filter for the 10Gb links, we built a Windows® 2008 R2 server on top of a VMware® host to house the NPAS agent. The virtualization team configured host affinity so the guest OS didn't get vMotioned away from the host with the physical cable in this proof-of-concept build. With cabling in place, a promiscuous port configured, host affinity set, and an NPAS agent deployed from an additional polling engine local to the data center, we were prepared to start parsing traffic at the application level.

Time was of the essence.

The alternative solution simply required too much lead time for the timeline we had to work with in this troubleshooting effort. Ordering a Gigastore appliance or building a dedicated Wireshark server would have taken weeks from order to delivery and installation. By leveraging existing hardware and virtual servers, we were able to execute this proof of concept in under a week. In spite of never having deployed DPI in a lab environment, not having socialized the idea of packet-level application monitoring with the other support teams, and not having defined a clear set of objectives, we were able to rapidly go from proposal to deployed solution with minimal effort.

And on top of all that...

Along the way we discovered that DPI may also save time and money. While Gigastore and Gigamon appliances are in no danger of being pushed out of core data centers, we're now considering a Gigamon with DPI to do the 'app vs. network' discernment for smaller data centers where the interesting traffic is less than 1 Gbps and unique application monitors number fewer than 50. The networking team is eyeing DPI in hopes that it can reduce the number of manual packet analysis requests in their future and improve analysis time by targeting the specific part of an application that's performing poorly, in turn allowing them to focus on other mission-critical tasks.


Thank you for sharing your story, IT hero and valued customer (you know who you are).


And for you other heroes out there, we'd love to hear from you, too. Don't ever hesitate to contact any one of us SWI employees. Or just say it loud and proud anywhere on thwack, particularly on Spread the Word.

In the fifteen-plus years that I have worked in information technology organizations, there has been a lot of change. From Unix to Linux, the evolution of Windows-based operating systems, and the move to virtualized servers, which led to the advent of cloud computing, it's been a time of massive change. One of the major changes I've seen take place is the arrival of big data solutions.


There are a couple of reasons (in my opinion) why this change happened: at very large scale, relational database licensing is REALLY expensive. Additionally, as much as IT managers like to proclaim that storage is cheap, enterprise-class storage is still very expensive. Most of the companies that initiated the movement to systems such as Hadoop and its ecosystem were startups where capital was low and programming talent was abundant, so having to do additional work to make a system perform better was a non-issue.


So, like any of the other sea changes you have seen in the industry, you will have to adjust your skills and career focus to align with these new technologies. If you are working on a Linux platform, your skills will likely transfer a little more easily than from a Windows platform, but systems are still systems. What are you doing to match your skill set to the changing world of data?

When saving logs, I like to store data that is as verbose as possible. However, when viewing a log, I may only be looking at specific parts of it. Another concern is needing to hand my logs to a third party without revealing certain information to them. I'll go over a couple of things that I use on a day-to-day basis. Note that entire books have been written about sed and awk, so my use of them here is very limited compared to what could be done.



The way I use sed is very similar to vi's search-and-replace tool. A good example: my blog sits behind a reverse proxy, and I have an iptables rule that logs any blocked traffic. Now, I'd like to share my deny logs, but I don't want you to see my actual server IP—only my reverse proxy IP. If I just showed you my raw logs, you'd see my actual IP address. By piping the log through sed, I can change it on the fly. The format to do so is sed s/<source pattern>/<destination pattern>/
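Since I'm keeping my real addresses out of this post, here's a sketch using made-up RFC 5737 documentation addresses in place of my server and proxy IPs (note the trailing g flag, which replaces every match on a line rather than just the first):

```shell
# 203.0.113.10 stands in for the real server IP, 198.51.100.1 for the
# reverse proxy IP; the dots are escaped because . matches any character.
echo "SRC=203.0.113.10 DST=203.0.113.10 PROTO=UDP" \
  | sed 's/203\.0\.113\.10/198.51.100.1/g'
# -> SRC=198.51.100.1 DST=198.51.100.1 PROTO=UDP
```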

I've used sed to change my IP, and now I can happily show the logs. Note this is done in real time, so piping tail through sed is possible:

Oct 19 14:43:11 mellowd kernel: [10084876.715244] IPTables Packet Dropped: IN=venet0 OUT= MAC= SRC= DST= LEN=118 TOS=0x00 PREC=0x00 TTL=53 ID=0 DF PROTO=UDP SPT=34299 DPT=1900 LEN=98

Oct 19 14:49:33 mellowd kernel: [10085258.251596] IPTables Packet Dropped: IN=venet0 OUT= MAC= SRC= DST= LEN=434 TOS=0x00 PREC=0x00 TTL=52 ID=0 DF PROTO=UDP SPT=5063 DPT=5060 LEN=414

Oct 19 14:52:53 mellowd kernel: [10085458.580901] IPTables Packet Dropped: IN=venet0 OUT= MAC= SRC= DST= LEN=40 TOS=0x00 PREC=0x00 TTL=104 ID=62597 PROTO=TCP SPT=6000 DPT=3128 WINDOW=16384 RES=0x00 SYN URGP=0



awk is very handy for showing only the information you require. In the above example there is a lot of information that I might not want to know about. Maybe I'm only interested in the date, time, and source IP address, and don't care about the rest. I can pipe the same tail command I used above through awk and get it to show only the fields I care about. By default, awk uses the space as the field separator, and each field is numbered sequentially. The format for this is awk '{ print <fields you want to see> }'


I'll now simply cat the syslog file, and use awk to show me what I want to see:

sudo cat /var/log/syslog |  grep IPTables | awk '{ print $1" "$2" "$3"\t"$13 }'

Oct 19 14:27:47    SRC=

Oct 19 14:29:04    SRC=

Oct 19 14:37:06    SRC=

Oct 19 14:40:32    SRC=

Oct 19 14:40:32    SRC=

Oct 19 14:41:36    SRC=

Oct 19 14:43:11    SRC=

Oct 19 14:49:33    SRC=

Oct 19 14:52:53    SRC=

Oct 19 14:54:50    SRC=

Oct 19 14:58:04    SRC=

Oct 19 15:01:49    SRC=

Oct 19 15:05:38    SRC=

Oct 19 15:06:48    SRC=

Oct 19 15:07:35    SRC=

Oct 19 15:13:42    SRC=

I've included spaces and a tab character between the fields to ensure the output looks the way I want it. If you count the fields in the original log, you'll see that field 1 = Oct, field 2 = 19, field 3 = the time, and field 13 = SRC=IP.

I may not want to see SRC= in the output, so I use sed to replace it with nothing:

sudo cat /var/log/syslog |  grep IPTables | awk '{ print $1" "$2" "$3"\t"$13 }' | sed s/SRC=//

Oct 19 14:27:47

Oct 19 14:29:04

Oct 19 14:37:06

Oct 19 14:40:32

Oct 19 14:40:32

Oct 19 14:41:36

Oct 19 14:43:11

Oct 19 14:49:33

Oct 19 14:52:53

Oct 19 14:54:50

Oct 19 14:58:04

Oct 19 15:01:49

Oct 19 15:05:38

Oct 19 15:06:48

Oct 19 15:07:35

Oct 19 15:13:42
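While we're at it, one more awk trick in the same vein: an associative array makes it trivial to tally how many drops each source generated. A sketch with inline sample lines standing in for the syslog (field 13 is SRC=, as counted above):

```shell
# Tally dropped packets per source address, busiest first. The sample
# lines mimic the 13-field layout of the iptables log entries above;
# the addresses are made-up documentation IPs.
printf '%s\n' \
  'a b c d e f g h i j k l SRC=198.51.100.7' \
  'a b c d e f g h i j k l SRC=198.51.100.9' \
  'a b c d e f g h i j k l SRC=198.51.100.7' \
  | awk '{ count[$13]++ } END { for (ip in count) print count[ip], ip }' \
  | sort -rn
# -> 2 SRC=198.51.100.7
#    1 SRC=198.51.100.9
```

In practice you'd replace the printf with the same sudo cat /var/log/syslog | grep IPTables pipeline used earlier.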


I'm eager to see any other handy sed/awk/grep commands you use in a similar scope to the ones above.

The terminology of 'parent-child' applies well in an IT service support environment, where support teams classify specific service requests as incident and problem tickets. While both incidents and problems are essentially disruptions to an IT service, ITIL defines and handles them differently.


Problem management differs from incident management thus:

  • The principal purpose of problem management is to find and resolve the root cause of a problem and thus prevent further incidents.
  • The purpose of incident management is to return the service to its normal level as soon as possible, with the smallest possible business impact.


A service desk allows IT teams to tag service requests as incidents and problems. Typically, there comes a situation when multiple incidents need to be linked to a single problem—especially when incidents give rise to problems. For example, when a server crashes after office hours, it can be treated as an incident and simply fixed to get the server running again. When multiple such incidents happen at various times, including during business hours, the business impact is greater. This can then be treated as a problem ticket that needs to be investigated, root-cause analyzed, and solved quickly. When a service desk enables you to link incidents to problems, further analysis and issue resolution become easier. Also, when it comes to closing tickets, one can simply close the problem ticket, which automatically closes all of its child incident tickets.


The importance of the parent-child relationship is also seen in IT asset management (ITAM), where multiple child assets can be linked to a parent asset. For example, a computer can be the parent, and its hardware peripherals can be the child assets. This is especially useful for inventory management and for easier understanding of asset assignment to clients.


SolarWinds Web Help Desk® supports both these scenarios: parent-child relation for ticketing management and IT asset management. Learn more about Web Help Desk »



My post, Tips for Mapping Your IP Address Space, included some ideas on how to effectively map your IP address space to see IP address usage in your network. Now, we will look at effectively managing your IPv6 address space. The rapid depletion of IPv4 address space and the increased difficulty of obtaining remaining IPv4 resources have prompted network administrators to seriously examine planning, allocation, and management strategies for IPv6 addresses.

For your company to continue achieving business and growth goals, you should start now to prepare your organization’s network to support IPv6—fully or in dual-stack with IPv4.

Benefits of Upgrading to IPv6

Your network infrastructure has a direct impact on your company’s business objectives and growth strategies. The loss of productivity due to network downtime can be costly. Knowing what IPv6 devices are running and how the address space is being used in your network greatly improves your ability to troubleshoot issues and accommodate network growth. The benefits of upgrading to IPv6 include:

  • Reduces network complexity and the need to search for IP information when troubleshooting
  • Heightens your ability to make informed decisions using reliable IP documentation
  • Helps simplify the move from dual-stack support with IPv4 to fully supporting IPv6

IPv6 Management

Some of the major tasks associated with managing IPv6 are:

  • Tracking and allocating IPv6 addresses
  • Identifying which IPv6 addresses are in use
  • Tracking and managing dual-stack hosts

Tracking and Allocating IPv6 Addresses

One of the key differences between IPv4 and IPv6 is that IPv6 has a much larger address space and much larger subnet sizes. The first step in tracking and allocating your IPv6 address space is to see which devices on your network are running IPv6. You can use spreadsheets for this process, but because IPv6 addresses are longer, more complex, and harder to remember, you increase your risk of making errors when you update the IPv6 data in your spreadsheets. In addition, the dynamic assignment of addresses through DHCP makes it difficult to keep IP documentation up to date. It's much easier to map your IPv4 and IPv6 addresses in a single system for tracking.
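To put "much larger subnet sizes" in perspective, a quick back-of-the-envelope calculation (a sketch in awk; the /64 is simply the common LAN prefix length):

```shell
# Host addresses available for a given IPv6 prefix length: 128 minus the
# prefix gives the host bits. A standard /64 LAN holds 2^64 addresses,
# versus 2^8 = 256 for a typical IPv4 /24.
echo 64 | awk '{ printf "/%d subnet: %.0f addresses\n", $1, 2^(128-$1) }'
# -> /64 subnet: 18446744073709551616 addresses
```

No spreadsheet column is going to make eighteen quintillion addresses per subnet manageable by hand, which is why automated tracking matters here.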

Identifying Which IPv6 Addresses are in Use

You can track IP address usage directly from routers. Also, periodically scanning your network to identify and update IP usage documentation helps you maintain a more organized and accurate inventory.

There are tools that enable you to automate tracking and scanning processes and most of them use a variety of scanning techniques like neighbor, ICMP, and SNMP scanning. Each technique has strengths and weaknesses, so to get an accurate, real-time visual of your IP space, it’s better to use a combination of all these techniques.

Your IP management system should be able to provide historical data on IP usage. This information is very useful in tracking IP address consumption over time and performing long-term capacity planning.

Tracking and Managing Dual-Stack Hosts

Track and manage your existing IPv4 space alongside your IPv6 addresses. This helps you with your overall IP management as well as planning for complete migration to IPv6 in the future. To improve overall efficiency of managing IP addresses in your network, choose an IP address management system that provides you with a management layer that spans the entire organization/network. In other words, consolidate your entire IP management along with your DHCP and DNS administration into one system.

Top 3 Tips for Managing IPv6 in Your Network

  • Automate network subnet and device scanning to build accurate address space maps and identify available IPs within an address block
  • Integrate DHCP and DNS administration for both IPv4 and IPv6 address space to reduce management effort and improve accuracy in provisioning and decommissioning IPs
  • Ensure IPv6 migration preparedness with accurate IP documentation, and verify that critical resources and operations are working as intended to avoid network downtime


Do you often get complaints from customers about the amount of time it takes to reach a support technician? What steps have you tried to overcome this issue?


  • Round-the-clock customer service
  • More technicians
  • Automation of customer service operations using a help desk or a ticketing tool


You can improve the turnaround time of customer support calls by providing self-help options. You don’t always need to involve your technicians in every support call or issue that comes through your door. Customer service can also involve providing information to customers so they can resolve some issues themselves.


Your support department likely receives a number of repeated requests for common issues. You can handle these requests like all the others you receive, or you can set up self-help channels that end-users can explore to resolve issues on their own. Self-help channels can be in the form of a knowledge base or even an online forum/community.


  • Knowledge Base (KB): You can compile the common and repetitive issues that end-users experience and document the issues’ descriptions and resolution steps. You configure your help desk software to recognize these issues and direct end-users to your knowledge base as an initial support option. A knowledge base is also a valuable resource for training and educating new help desk technicians.


  • Online Forum/Community: An online forum is where IT pros can share their experiences, queries, issues, and feedback. Technicians can use the forum to interact with customers and invite members to offer their own tips and suggestions for resolving issues.


The knowledge base and online community are useful tools for assisting end-users with common issues that don’t require extensive troubleshooting and technician assistance. Other self-help channels include:


  • Interactive Voice Response (IVR): An automated telephony system that interacts with callers to gather information and route calls to the appropriate recipients. IVRs are designed to accommodate certain types of caller interactions, but they can speed up support calls by routing the caller to the right staff members/technicians faster.


  • Web Portal: A self-managed account where customers can log in and retrieve information from various sources, such as commerce functionality, support ticket updates, new products, discounts, etc.


  • Proactive Engagement: A method of giving customers information they need before they contact customer service (bill notifications/details, balance updates, insurance updates and expiration dates, etc.). Organizations can use this method both to communicate important information to customers and to encourage customers to use their products.


Excellent customer service fosters customer loyalty, strengthens your reputation in your industry, and leads to continual business success. Incorporating self-help channels enables you to elevate the level of your customer satisfaction by moving your customers from issue to resolution faster.

In Fundamentals of VoIP Monitoring & Troubleshooting Part 1, we spoke about the difficulties of reactively troubleshooting VoIP-related problems and how Call Detail Records (CDRs) can be used to fill the gap in time. When an end user experiences a problem, it can be minutes, hours, or even days before you're notified and the troubleshooting process begins. Visibility into what took place during that call is paramount, and the metrics gathered from CDRs can help.

Let’s start with the following call quality metrics:

Network Jitter: Real-time voice communications over the network are sensitive to delay in packet arrival time and to packets arriving out of sequence. Excess jitter results in calls breaking up. Jitter can be reduced to a certain extent by using jitter buffers—small buffers that cache packets and provide them to the receiver in sequence and evenly spaced for proper playback. Buffer lengths can be modified; however, if the jitter buffer is increased too much, the call will experience unacceptable delay. Conversely, a smaller buffer results in less delay but more packet loss. Jitter is measured in milliseconds (ms).


Latency: Latency, or lag, is the time delay incurred in the transmission of a voice packet. Excess latency results in delay, packet drops, and eventually echo. Latency is measured in milliseconds (ms).


Packet Loss: Packet loss occurs when one or more packets of data fail to reach their destination. A single lost packet is referred to as a "packet gap," and a series of lost packets is known as a "burst." Packet loss can occur for a variety of reasons, including link failure, high congestion levels, misrouted packets, buffer overflows, and a number of other factors. Packet loss causes interrupted playback and degradation in voice quality. It can be controlled to a degree by packet loss concealment techniques within the playback codec.


MOS: Mean Opinion Score is a numerical value indicating the perceived quality of the call from the user's perspective after compression, transmission, and decompression. MOS is a calculation based on the performance of the IP network, is defined in the ITU-T P.862 (PESQ) standard, and is expressed as a single number in the range of 1 to 5, where 1 is the lowest perceived quality and 5 is the highest.

The above metrics are important to monitor and control in order to keep call quality at an acceptable level. It's also important to note that these metrics can vary depending on where they're captured. As a best practice, it's a good idea to monitor them end to end within your VoIP network. In our next post, we'll talk about how you can capture these statistics from the perspective that matters most—between two VoIP phones after a failed call.
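As a rough illustration of how a single MOS number can be derived from network measurements, the ITU-T G.107 E-model (a companion standard to PESQ) computes a transmission rating factor R from delay and loss impairments and then maps R onto the 1-to-5 MOS scale. A sketch of just that final mapping in awk, with R=80 as an assumed example value:

```shell
# E-model (ITU-T G.107) curve from R-factor to estimated MOS. Real tools
# first derive R from measured latency, jitter, and packet loss; here we
# feed in an assumed R of 80 simply to show the mapping.
echo 80 | awk '{ R = $1
  if (R < 0)        mos = 1
  else if (R > 100) mos = 4.5
  else              mos = 1 + 0.035*R + R*(R-60)*(100-R)*0.000007
  printf "R=%d -> MOS %.2f\n", R, mos }'
# -> R=80 -> MOS 4.02
```

An R around 80 (MOS roughly 4.0) is commonly treated as the floor for "toll quality" voice, which is why monitoring tools alarm as MOS drifts below it.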


For more information on monitoring and troubleshooting VoIP please read our white paper here.


Log time lengths

Posted by mellowd Oct 13, 2014

How long do you keep your logs? The answer can vary wildly depending on the industry you work in. As an example, most VPN providers specifically note that they do not hold logs, so even if a government requested certain logs, the providers would not have them. The logs they don't keep are likely to be only user access logs; they'll still have internal system logs.


Ultimately, keeping logs for a long time is of little benefit unless there is a security reason to do so. The recent Shellshock bug is a great example of when older logs can be useful. You may have a honeypot out in the wild, and once a known issue comes to the fore, you can scan your logs to see whether that particular bug was exploited before it became well known.
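Shellshock happens to be an easy one to scan for, because the exploit embeds the bash function-definition string () { in HTTP headers, and that string lands verbatim in web server access logs. A sketch with inline sample lines standing in for a real access log (the paths and IP are made up):

```shell
# grep -F treats the pattern as a fixed string, so no regex escaping is
# needed. The second sample line mimics a typical Shellshock probe
# against a CGI script.
printf '%s\n' \
  'GET /index.html HTTP/1.1 "Mozilla/5.0"' \
  'GET /cgi-bin/test HTTP/1.1 "() { :;}; /bin/ping -c 1 198.51.100.5"' \
  | grep -F '() {'
# -> GET /cgi-bin/test HTTP/1.1 "() { :;}; /bin/ping -c 1 198.51.100.5"
```

Against a real server you'd point the same grep at your archived access logs—which is exactly when holding onto older logs pays off.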


Country and industry regulations will also influence the length of time logs are kept. Many countries require that documentation and logging data be kept for a certain number of years for any number of reasons.


I'm interested to know how long you keep logs: which logs in particular, and why that retention period was chosen.

I'm in the middle of a tough project right now: a large client is trying to convert a large number of physical SQL Servers to virtual machines. They've done most of the right things—the underlying infrastructure is really strong, the storage is more than adequate, and they aren't overprovisioning the virtual environment.


Where the challenge comes in is how to convert from physical to virtual. The classical approach is to build new database VMs, restore backups from the physical server to the VM, and ship log files until cutover time. However, in this case there are some application-level challenges preventing that approach (mainly heavily customized application-tier software). Even so, my preferred method here is to virtualize the system drives and then restore the databases using database restore operations.


This ensures the consistency of the databases and rules out any corruption. Traditional P2V products have challenges handling the rate of change in databases—many people think read-only database workloads don't generate many writes, but remember, you are still writing to a cache and frequently using temp space. What challenges have you seen in converting from physical to virtual?

As a VoIP Engineer, I’m sure you’ve spent your fair share of time troubleshooting mysterious VoIP outages and call quality issues. You know, the ones that only happen when you’re not around and apparently only strike the most annoying end-user on the floor.

These kinds of problems can be very difficult to troubleshoot and to root-cause. Typically, troubleshooting begins well after the fact and without any clues to help isolate the problem. This usually results in attempting to reproduce the issue by performing multiple test calls, checking the network for inconsistencies, and interrogating the end-user. If you're the adventurous type, you'll even go as far as installing your favorite packet capture software and reading through terabytes of captured network traffic, only to find that what sounded like a network issue is now gone without a trace.


Having a packet capture is very helpful when troubleshooting VoIP problems, but having one in the right place at the right time is easier said than done. If only there were a way to go back in time and place your packet capture between the end-user and the switch port prior to the call, surely you would have found the UDP errors or out-of-sequence packets that caused the garbled call. Unfortunately, time travel is not an option, but do not despair. By enabling a couple of features on your Cisco® Call Manager and/or Avaya® Communication Manager, next time you'll have the clues you need to isolate these problems without the guesswork.


Call Detail Records go by many different names—CDRs, CMRs, CQEs—depending on the VoIP platform you own, but they all provide very similar and useful call statistics. These include the origin and destination of the call, the starting time of the call, the call duration, and termination codes. They also provide more important call quality statistics like jitter, latency, packet loss, and mean opinion score (MOS).


Each one of these metrics can have an effect on a call and, more importantly, can be used to isolate and resolve VoIP issues. In the second post of this four part series, I’ll cover each metric in more detail and explain what you need to look for when analyzing each while troubleshooting.


For more information on SolarWinds VoIP & Network Quality Manager watch our SE Guided Tour here.

The need for and quality of your product bring customers through the door, but it's your customer service that keeps them coming back. Your customers' experience with your post-transaction service and support is a big contributor to your organization's ability to gain customer confidence and loyalty.


Ideally, you want customers to use your products without needing technical assistance. But, with technology products, customer support calls are inevitable. Your value as a company is determined by your customers and their encounters with your organization. This means your customer support methods need to proactively accommodate your customers’ needs and timelines.


Often, customers needing technical assistance are in a time crunch so speedy resolutions are always desirable. So, what can you do on your end to avoid service delays, reduce customers’ wait time, and ensure that customer interactions are positive? Improving performance and productivity starts with organizing your workspace. With the right tools and staff, you can deliver a high standard of customer service that results in high customer satisfaction.


Automation to the Rescue

You likely get the customer-support ball rolling by creating a ticket after the initial service request (e.g., email, phone call, text message). Then there's triage, routing, escalating, assigning a technician, and notifying the customer that a ticket has been created and a resolution is in the works. Where customer service is all about efficiency and quick turnaround times, automating as many of these tasks as you can significantly accelerates these processes and keeps you organized.


Your help desk is staffed with a group of hand-picked, talented technicians with particular skill sets and specialties. This is always useful for the many times you need to re-route or escalate tickets. Your help desk tool also brings a cache of talents to the table. It's worth your time to configure your help desk tool to recognize the specialties of individual technicians, groups of staff, ticket categories, and priority levels. This cuts down on much of the effort of manually expediting tickets.


Where the Customer Service Rubber Meets the Road

Raise your hand if you’ve ever had a customer call to check the status of an issue. It’s understandable that people don’t like to wait. Drumming fingers on the desk gets old pretty fast. But what’s worse than waiting is not knowing how long the wait will be.


Give your customers the benefit of the human touch by keeping them informed throughout the ticketing process. Overcommunicating is not annoying when it's for the benefit of the customer. The majority of positive customer service stories include a testimonial about how the customer was kept in the loop every step of the way.


When you close tickets, make it easy for your customers to give you some feedback and lend their opinions about their customer service experience. Again, this is where your automated help desk processes give you the luxury of effectively interacting and building a rapport with your customers to assure them that their needs are important to you.


The customer relationship is the heart of any business. When a high level of customer satisfaction is your goal, your help desk ticketing processes are how you achieve most of your customer-centric milestones toward that goal.

A recap of the previous month’s notable tech news items, blog posts, white papers, video, events, etc. - For the week ending Tuesday, Sep 30th, 2014.



Did VMware dismiss physical networking?

This week, analysts discuss the future of physical networking, the networking benefits midmarket companies enjoy and security challenges fueled by cloud and mobility applications.


Emerging challenges of maintaining content security

Content security in particular faces novel challenges as mobility and Web2.0 applications make web traffic volumes highly unpredictable, with massive spikes and troughs from one moment to the next.


PEAK IPV4? Global IPv6 traffic is growing, DDoS dying, says Akamai

It’s the first time the caching network has seen a drop in the use of 32-bit IP addresses. Broadband and IPv6 are hot, while distributed denial-of-service attacks and IPv4 are not. Well, that's according to Akamai.


Why Deep Packet Inspection still matters

Deep Packet Inspection (DPI) is a technology that carries much more weight than SPI (Stateful Packet Inspection).


IPv6 connectivity up; enterprises begin to take note

The 2014 North American IPv6 Summit showcased the growth of the protocol and shed some light on what enterprises are doing to (finally) adopt it.


Blogger Exploits


IPv6 Migration: 10 Reasons To Get Moving

You can't afford to put off that IPv6 transition any longer. Here are 10 ways migrating to the updated protocol will benefit your enterprise.


Aggregation in IPv6 routing curbs effects of Internet growth

IPv6 addresses are four times larger than those based on IPv4, but experts say that doesn't mean IPv6 will slow down routers. In fact, IPv6 routing makes route aggregation simpler, which leads to smaller routing tables.


Network timing: Everything you need to know about NTP

As more applications use IP networks, reliable distribution of authoritative timing information is becoming critical.


IoT Makes You Administrator Of Things

You may not realize it, but it's likely you're already supporting and managing "Internet of Things" devices on your corporate network. Here's a reality check.


Supporting IoT devices requires careful WLAN design

The Internet of Things (IoT) has become defined by the perennial refrigerator that orders more milk when you run out. But in this Q&A, one networking pro explains its role in the enterprise and the issues with designing a WLAN to support IoT devices.


The Hybrid WAN requires a new approach to network optimization

Now that the WAN is finally evolving, I think it’s time to take a look at the infrastructure that’s used to optimize it.


Food for Thought


Using Social Media to Connect with Your Customers - Communicate more openly and often with customers.


10 Pitfalls to Avoid When Implementing a BYOD Program - People naturally gravitate toward devices they know how to use well in order to get a task completed.

Statistics show that IT pros spend, on average, 38 hours each month managing IP addresses (even more for larger networks of 5,000 IPs or more). More specifically, you spend a considerable amount of your day provisioning changes, monitoring faults, and troubleshooting problems. (SolarWinds® IT Pro Survey)

Your IP management in-basket probably stays pretty full by virtue of the fact that technology doesn’t stand still. Networks grow and manual management methods such as spreadsheets are becoming less practical. As you look toward more advanced DNS, DHCP, and IP address (DDI) management solutions, strongly consider adopting the following DDI best practices for effectively managing both IPv4 and IPv6 address blocks.

Transition to an Automated IP Management System - Automated Scanning & Discovery

The biggest disadvantage of manual IP address management (i.e. spreadsheets and management system databases) is the amount of time required to keep your documentation updated, which takes a toll on overall productivity. Manual IP management methods:

  • Have no change management system.
  • Inhibit IT team collaboration.
  • Are difficult to implement, oversee, and update.
  • Require more time and effort for troubleshooting.

An automated system saves time, reduces the risk of human error, and takes the guesswork out of your IP address space management. An automated tool scans, discovers, and updates your IP status and usage information in real time. An automated tool:

  • Includes active IP space and network monitoring.
  • Helps detect IP conflicts in the network.
  • Compiles historical data of IP address usage.
  • Manages DHCP and DNS servers on an organization-wide basis.
  • Helps simplify the transition from IPv4 to IPv6.

Manually tracking IP address usage is becoming less practical for organizations of all sizes. Trends such as BYOD, IoT, and Server Virtualization have greatly increased the consumption of IP addresses. Therefore, it’s apparent that automated IP address management is a more reliable way of tracking IP address usage.
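As a toy illustration of what such a tool automates, the conflict-detection and usage-history bullets above can be sketched in a few lines of Python. The class and field names here are hypothetical, not any vendor's API; a real product would populate this from live network scans rather than manual calls:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional
import ipaddress

@dataclass
class IPRecord:
    mac: Optional[str] = None                    # last MAC seen at this address
    history: list = field(default_factory=list)  # (timestamp, mac) pairs

class IPInventory:
    """Toy inventory: record scan results, flag conflicts, track history."""

    def __init__(self, subnet: str):
        self.subnet = ipaddress.ip_network(subnet)
        self.records = {}

    def record_scan(self, ip: str, mac: str, when: Optional[datetime] = None) -> bool:
        """Store a scan result; return True if it looks like a conflict
        (the same IP answering with a different MAC than last time)."""
        if ipaddress.ip_address(ip) not in self.subnet:
            raise ValueError(f"{ip} is outside {self.subnet}")
        rec = self.records.setdefault(ip, IPRecord())
        conflict = rec.mac is not None and rec.mac != mac
        rec.mac = mac
        rec.history.append((when or datetime.now(), mac))
        return conflict

    def utilization(self) -> float:
        """Fraction of the subnet's addresses recorded as in use."""
        return len(self.records) / self.subnet.num_addresses
```

The same history list that powers conflict detection also gives you the historical usage data mentioned above for free.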

Configure IP Address Lease Durations Based on IP Usage and Availability - Planning & Provisioning

All DHCP servers are configured with a lease configuration that contains information on the duration for which an IP address is leased out to a particular device/client. The DHCP server leases the IP to the client for a preset length of time and the client can automatically renew the lease further, if required. The lease duration has a direct impact on IP consumption in the network. Consider this fact when you configure the duration of leases.

For example, when a client leases an IP address, it retains ownership of the IP for the specified lease period. If the client needs the IP address for only a few minutes and the lease time is set to a few days, the IP remains tied up when it’s actually not being used. Here are a few tips for ensuring that IP addresses are used efficiently:

  • Set the DHCP server lease terms based on the types of clients that the DHCP server services.
  • Keep the lease duration for the DHCP server that services BYOD relatively short (e.g. a few hours).
  • Try to keep the lease durations of the mobile/wireless devices coming in and out of your network short.
  • Ensure that your DNS servers are updated with all of the leases, lease renewals, and lease expirations.
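For example, with ISC DHCP (dhcpd) the lease tips above might translate into per-subnet settings like the following; the subnets and durations are illustrative only, and your own values should reflect your client mix:

```
# Short leases for a guest/BYOD wireless subnet
subnet 10.20.0.0 netmask 255.255.0.0 {
  range 10.20.1.1 10.20.250.254;
  default-lease-time 7200;    # 2 hours
  max-lease-time 14400;       # 4 hours
}

# Longer leases for wired office workstations
subnet 10.10.0.0 netmask 255.255.0.0 {
  range 10.10.1.1 10.10.250.254;
  default-lease-time 86400;   # 1 day
  max-lease-time 604800;      # 7 days
}
```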

Proactively Identify IP Conflicts and Utilization Issues - Monitoring, Alerting, & Troubleshooting

Active network monitoring serves two purposes: avoiding network performance issues and troubleshooting problems. It keeps you aware of situations that can cause network downtime or slow employee productivity, and it adds a layer of security by detecting occurrences such as a rogue device on the network or an unrecognized MAC address in a subnet.

Obviously, you want to make sure that your reservoir of IP addresses never runs dry so you can ensure that all end-users can connect to the network. Tracking IP address consumption over time helps you project the eventual depletion of your IP addresses and make the necessary plans. Also, being alerted about events like IP conflicts, full scopes, DNS record mismatches, etc. is helpful for effective IP management.
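Projecting when the pool runs dry can be as simple as fitting a linear trend to periodic utilization samples. A rough sketch, where the `(day, addresses_in_use)` sample format is an assumption for illustration:

```python
def days_until_exhaustion(samples, capacity):
    """Project when an address pool runs dry from (day, addresses_in_use)
    samples, using a simple least-squares linear trend.
    Returns None if usage is flat or shrinking."""
    n = len(samples)
    mean_x = sum(d for d, _ in samples) / n
    mean_y = sum(u for _, u in samples) / n
    num = sum((d - mean_x) * (u - mean_y) for d, u in samples)
    den = sum((d - mean_x) ** 2 for d, _ in samples)
    slope = num / den                      # addresses consumed per day
    if slope <= 0:
        return None                        # no growth; no projected exhaustion
    current = samples[-1][1]
    return (capacity - current) / slope
```

A real tool would feed this from its scan history and fire an alert when the projection drops below your provisioning lead time.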

Performing all these tasks using an IP management solution makes your IP address management life easier. A tool that tightly couples DNS and DHCP keeps you more organized because IP addresses and DNS records are updated simultaneously, which prevents problems resulting from outdated host records. Also, make sure that your IP management tool can function as a management layer that provides reliable interoperability between heterogeneous DNS and DHCP solutions.

Read this whitepaper to know more about best practices for configuring & monitoring DHCP, DNS, and IP addresses.


A common complaint among IT pros on desktop support teams is that they spend more time managing service requests than performing actual IT support. This is because there is no easy way to track and manage service requests; handling them manually involves too much complexity. While enterprises are realizing the value of IT service support solutions to improve their service delivery, many small companies still depend on disparate channels for logging incidents (phone, email, chat, even verbal communication), with an Excel sheet as a manual workaround for a help desk. This is certainly not the most efficient way to handle service requests if you want to save time, invest it in actual end-user support, and improve productivity. So, what complexities arise in IT service request handling, and how can IT pros be relieved of these ad hoc, time-consuming management chores?



#1 Manual Ticket Assignment: Disappearing Time & Disorganized Process

When a ticket is logged, the first thing you need to do is sort it based on where it came from, what type of issue it is, and what its priority level is, and then assign it to the right technician who is available and has the skill set to handle it. Automation is key. Without automating the ticket assignment process, you will spend a lot of time analyzing all of these parameters manually. With pre-defined ticket assignment and routing logic built into a help desk system, you can streamline the process and ensure each service request is logged and immediately assigned to the right technician.
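A minimal sketch of such routing logic follows. The categories, priority scale, and group names are made up for illustration; real help desk tools express this through configurable rules rather than code:

```python
# Hypothetical rules, evaluated in order: first match wins.
ROUTING_RULES = [
    {"category": "network", "min_priority": 3, "assignee": "net-escalation"},
    {"category": "network", "min_priority": 0, "assignee": "net-support"},
    {"category": "desktop", "min_priority": 0, "assignee": "desktop-team"},
]

def assign_ticket(category, priority, rules=ROUTING_RULES):
    """Return the first matching technician group, or a catch-all queue
    for tickets no rule recognizes."""
    for rule in rules:
        if rule["category"] == category and priority >= rule["min_priority"]:
            return rule["assignee"]
    return "triage-queue"
```

Ordering the rules from most to least specific is what lets high-priority network tickets skip straight to the escalation group.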


#2 A Bitter Tryst with Service Fulfilment Delays & Escalations

Before you start addressing service fulfilment delays, you must track the time spent on incidents. Once you know your average ticket response and resolution times, you can build baselines and set up SLA-breach alerts. This is proactive rather than reactive: before an unplanned issue causes a delay on a ticket, you will be alerted that you are nearing the SLA timeframe, which helps keep service delivery seamless and prompt. Escalations are another issue when SLAs are not met. There has to be a strategic approach to planning and assigning an escalation workflow, so that an SLA breach or client complaint is escalated to the next technician, group of technicians, or next-level support personnel (a manager or whoever is in charge).
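The SLA clock and escalation chain can be sketched like this. The warning threshold and role names are assumptions for illustration, not any product's defaults:

```python
from datetime import datetime, timedelta

def sla_check(opened, now, sla_hours, warn_fraction=0.8):
    """Classify a ticket against its SLA window.
    Returns 'breached', 'warning' (past warn_fraction of the window), or 'ok'."""
    elapsed = now - opened
    window = timedelta(hours=sla_hours)
    if elapsed >= window:
        return "breached"
    if elapsed >= window * warn_fraction:
        return "warning"
    return "ok"

def escalation_target(level):
    """Hypothetical escalation chain; each unresolved breach bumps the
    ticket one level up, capped at the top of the chain."""
    chain = ["technician", "senior-technician", "team-lead", "service-manager"]
    return chain[min(level, len(chain) - 1)]
```

The 'warning' state is what makes this proactive: the alert fires while there is still time left on the clock.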


#3 Ticket Resolution Status: Kept In the Dark

As a help desk technician or desktop support agent, you need to track and manage the status of tickets from creation to resolution, and record all details for historical analysis. An Excel sheet is NOT the tool for this job when hundreds of service requests pour in each day and you have to sit and modify statuses on a spreadsheet. Whether it’s a single technician’s tickets, a support team’s, or the entire IT department’s, help desk software lets you capture, track, and monitor ticket status with status management dashboards and notifications.
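What a help desk tool does here amounts to enforcing a ticket lifecycle and logging every transition for later analysis. A minimal sketch, with a hypothetical set of states (real tools let you define your own):

```python
# Allowed transitions between hypothetical lifecycle states.
TRANSITIONS = {
    "new": {"assigned"},
    "assigned": {"in_progress", "new"},
    "in_progress": {"resolved", "assigned"},
    "resolved": {"closed", "in_progress"},  # a reopen goes back to in_progress
    "closed": set(),
}

class Ticket:
    def __init__(self, ticket_id):
        self.ticket_id = ticket_id
        self.status = "new"
        self.log = [("new", None)]  # (status, note) pairs for historical analysis

    def move(self, new_status, note=None):
        """Apply a status change, rejecting transitions the lifecycle forbids."""
        if new_status not in TRANSITIONS[self.status]:
            raise ValueError(f"cannot go from {self.status} to {new_status}")
        self.status = new_status
        self.log.append((new_status, note))
```

The transition table is exactly what a spreadsheet cannot enforce: nothing stops someone from typing "closed" into a cell for a ticket that was never resolved.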


Help desk software helps you automate and streamline your ticketing process by providing the flexibility to create your own workflows and dynamic ticket assignment rules. Service request management should not be complex when help desk software offers ample functionality to simplify it. With simplified ticketing management, you gain more time for actual IT support and issue resolution.
