Cisco's Smart Business Architecture (SBA) for Mid-Sized Networks was designed to help you narrow down the number of choices available when designing your network. Cisco SBA provides a series of blueprints that simplify solutions for businesses with between 250 and 1,000 employees.
SBA provides recommended hardware selections and combinations, network designs, and even includes recommended configuration templates. It makes the process of acquiring gear, designing the network, and configuring everything to work together much easier.
If you haven't had a chance to check out the Cisco SBA, I highly suggest that you do. You can read more about it here. It can significantly simplify a sometimes tedious and confusing process.
See how SolarWinds fits into your SBA solution:
Cisco SBA Network Design Guides
Also, check out the Cisco SBA blog for Government Solutions.
The IPAM DHCP Split Scope Wizard
Split scopes are used for two main reasons: to load balance between two DHCP servers, or to ensure highly available DHCP services for your network.
When you split a scope, the primary server is responsible for a certain group of IP addresses, and the secondary is responsible for the remainder. An offer delay (generally between 1000 and 5000 milliseconds) is set for the secondary server to ensure that if the primary server is unable to provide an IP address within the offer delay time, the secondary server will do so using its pool of addresses.
Scopes are usually split into one of two configurations:
You start with scope01 on your primary DHCP server. Scope01 includes the entire subnet of 10.10.10.0/24 (254 IP addresses), with no exclusions. You split scope01 and name the second scope, scope02, on your secondary DHCP server. You choose an 80/20 split.
Now, scope01 will still span the entire subnet, but will exclude the last 20% of the addresses in that subnet (10.10.10.204-254). Scope02 will also span the entire subnet, but will exclude the first 80% of the addresses in that subnet (10.10.10.1-203).
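The exclusion arithmetic above can be sketched with Python's `ipaddress` module. The `split_scope` helper below is hypothetical (not part of IPAM); it just shows how the 80/20 boundary turns into the two exclusion ranges:

```python
import ipaddress

def split_scope(network: str, primary_pct: int):
    """Compute the exclusion ranges for a split scope.

    Both servers are configured with the full subnet; each one excludes
    the portion served by the other. Hypothetical helper for illustration.
    """
    hosts = list(ipaddress.ip_network(network).hosts())
    cut = round(len(hosts) * primary_pct / 100)
    # Primary serves hosts[:cut], so it excludes the remainder.
    primary_excl = (hosts[cut], hosts[-1])
    # Secondary serves hosts[cut:], so it excludes the first portion.
    secondary_excl = (hosts[0], hosts[cut - 1])
    return primary_excl, secondary_excl

p, s = split_scope("10.10.10.0/24", 80)
# p spans 10.10.10.204-10.10.10.254, s spans 10.10.10.1-10.10.10.203
```

For a /24 with 254 usable hosts, an 80% cut lands at .203, matching the ranges described above.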
IPAM 3.1 now includes a Split Scope wizard that minimizes the guesswork.
Here is a quick summary of how the wizard works.
To open the split scope wizard, click the DHCP & DNS Management tab >> DHCP Scopes >> select a DHCP scope to be split >> click Split Scope.
Note: To perform the DHCP split scope operation, ensure you have two DHCP servers added to IPAM.
Defining Split Scope: The split scope wizard shows the source DHCP server selected for the operation and lets you select the target DHCP server to which the scope and its IP addresses will be split.
Define the Source & Target DHCP Servers for Performing Split Scope Operations.
Range Distribution: This step allows you to specify the percentage of IP addresses to allocate to the source and target DHCP servers. You can simply drag the percentage scale to set the split percentage as required; the IP addresses within the DHCP scopes will change accordingly to reflect the split. Or, if you have specific IP address ranges decided for both servers, you can enter them in the Include IP Addresses and Exclude IP Addresses text fields, and the percentage scale will adjust accordingly.
Once range distribution is complete, click Finish, and you will get a pop-up window confirming the successful split scope operation.
Looking Up the IP Address Split: After performing the DHCP split scope operation, go to the DHCP Servers tab and mouse over both the source and target DHCP servers to see each server's IP address range according to the split.
For a more detailed walkthrough, see: How To Perform DHCP Split Scope using SolarWinds IP Address Manager
For more info on all the new features in IP Address Manager 3.1 see:
SolarWinds IP Address Manager (IPAM) offers powerful, centralized management of Microsoft DHCP servers.
Using SolarWinds IPAM you can easily:
• Add new or edit existing Microsoft DHCP servers and scopes.
• Set, update, or delete reservations, reservation status, and DHCP properties, including IP ranges.
The SolarWinds IPAM solution allows you to manage both Microsoft DHCP servers and Cisco IOS DHCP servers.
Adding a DHCP Server
Note: All DHCP servers must already exist as nodes before IPAM can monitor them.
There are two options for adding nodes:
• Entering nodes manually one at a time
• Using the Network Discovery Wizard to add multiple nodes.
Once the DHCP server is added as a node on the Orion server, you can add it to the IPAM web console by clicking
IP Addresses tab >> DHCP & DNS Monitoring >> DHCP Servers >> Add New >> DHCP Server
This opens the Add DHCP Server page. Choose the required DHCP server from the list of nodes (already discovered by the Network Discovery Wizard or added manually) and create or choose credentials.
Click Test; once the test is successful, click Add DHCP Server to add the server to the IPAM web console.
You have now successfully added a DHCP server to IPAM. From here you can begin to edit the properties for each server, split scopes, and assign reservations, as needed.
In larger organizations, it is very common for different people to have responsibilities to manage different blocks of subnet address spaces for their respective departments/divisions/regions. SolarWinds IPAM provides the ability for your IP Address Management tasks to be divided up amongst different people/groups, such as functional groups, geographic regions, virtual server teams, and critical staff.
Perhaps you want your desktop team to have visibility into the IP scopes for a particular office floor's VLANs, but without visibility into secure web infrastructure networks. Beginning with version 3.0, IPAM enables the definition of user access roles on a subnet, group, or supernet basis.
Specify which users have what level of permissions (read/write) to certain address spaces (group, supernet, or subnet). It is important to note that if subnets are moved in a way that changes the hierarchy, roles will be inherited from the new parent.
Any existing customized roles will not be changed or inherited.
When deciding which roles will work best in your environment, determine what the user really needs access to on a daily basis. The following IPAM user roles are available:
This role has read/write access and can initiate scans on all subnets; manage credentials, custom fields, and IPAM settings; and has full access to DHCP management and DNS monitoring.
Power Users can reorganize network components in the left pane of the Manage Subnets and IP Addresses view and have full access to DHCP management and DNS monitoring. This role also includes the ability to edit properties and custom fields on portions of the network made available by the site administrator.
The Operator role has read-only access to DHCP Scope, Servers, Reservations, and DNS Servers, Zones, and Records.
These users can also add and delete IP address ranges on portions of the network made available by the site administrator, change the subnet status selection on the Manage Subnets and IP Addresses page, and manage IP address properties and custom fields, including editing IP address properties, on those same portions of the network.
This role has read-only access to all subnets and to DHCP servers, scopes, leases, and reservations, and DNS servers, zones, and records.
This role is defined on a per subnet basis. DHCP and DNS access will depend upon the Global account setting for those nodes.
In a nutshell - after selecting Custom, click Edit to define what the user can and cannot see.
Next you select the desired subnet and define which role this user will have.
Make note of the Inherited column on the far right to confirm that the correct inheritance is being applied.
The following is a good example of the differences in what a user with a custom role can and cannot see.
If you are interested in detailed steps for setting up IPAM user delegation see this post.
Below is an overview of all the role operations. The color-coded legend is as follows:
The following table details the operations available to each role.
Overloading your Kiwi Syslog Server can occur in many ways.
The first and most obvious indication is a non-zero value in the "Message Queue overflow" section of the Kiwi Syslog Server diagnostic information.
A non-zero value indicates that messages are being lost due to overloading the internal message buffers. This can be verified by viewing your diagnostic information:
Go to the View Menu > Debug options > Get diagnostic information (File Menu > Debug options, if running the non-service version). If you see non-zero values then you know you have a problem.
The second way overloading occurs is when the "Messages per hour - Average" value in the Kiwi Syslog Server diagnostic information exceeds the recommended "maximum" syslog message output that Kiwi Syslog Server can nominally handle. This value is around 1 - 2 million messages per hour average, depending on the number and complexity of rules configured in your Kiwi Syslog Server.
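The two overload conditions above reduce to a simple check. As a sketch (the 1.5 million figure below is an illustrative midpoint of the 1-2 million range, not a constant published by Kiwi):

```python
# Illustrative overload check; 1.5M is a midpoint of the 1-2 million
# msgs/hour range quoted above, not a constant published by Kiwi.
NOMINAL_MAX_PER_HOUR = 1_500_000

def is_overloaded(avg_msgs_per_hour: int, queue_overflow: int) -> bool:
    """Overloaded if any messages were dropped (non-zero queue overflow)
    or the hourly average exceeds the nominal ceiling."""
    return queue_overflow > 0 or avg_msgs_per_hour > NOMINAL_MAX_PER_HOUR
```

Either condition alone is enough to warrant load balancing; the real ceiling depends on the number and complexity of your configured rules.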
If either of these two scenarios is true for your current Kiwi Syslog Server instance, then load balancing your syslog message load can mitigate any overloading that may occur.
To load balance Kiwi Syslog Server, start by inspecting your Kiwi Syslog Server diagnostic information, specifically looking for syslog hosts that account for around 50% of all syslog traffic. These higher-utilization devices are candidates for load balancing, which is accomplished by implementing a second instance of Kiwi Syslog Server.
For example, consider the following "Breakdown of Syslog messages by sending host" from the diagnostics information.
Breakdown of Syslog messages by sending host
Top 20 Hosts
From these diagnostics, you can see that 22.214.171.124 and 126.96.36.199 account for more than 50% of the syslog load. Often, half of all syslog events come from just one or two devices, and that is indeed the case here.
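Picking the offload candidates from such a breakdown is easy to sketch. The helper and the per-host message counts below are illustrative, not real diagnostic output:

```python
def offload_candidates(counts, share=0.5):
    """Return the smallest set of top talkers whose combined traffic
    reaches `share` of the total (hypothetical helper)."""
    total = sum(counts.values())
    picked, running = [], 0
    for host, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        picked.append(host)
        running += n
        if running >= total * share:
            break
    return picked

# Illustrative per-host message counts, echoing the breakdown above.
counts = {
    "22.214.171.124": 310_000,
    "126.96.36.199": 245_000,
    "10.0.0.5": 200_000,
    "10.0.0.6": 150_000,
    "10.0.0.7": 95_000,
}
top = offload_candidates(counts)  # hosts to move to the second instance
```

The hosts returned are the ones to repoint at the second Kiwi Syslog Server instance.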
To enable a load balanced Kiwi Syslog Server configuration, perform the following actions:
In part 1 we discussed the following points:
The other component to Server Monitoring is Hardware related.
Hard Disk Monitoring:
The hard disk is the device the server uses to store data. Data stored on disk is permanent: unlike RAM, it survives a reboot and remains available until it is deliberately erased by the end user.
It's important to monitor the hard disk for a couple of reasons. The operating system needs space on the disk for normal operating processes, including paging files and caches. The applications running on the server also need space to write temporary data to cache. Low free space on a drive is also one of the causes of file system fragmentation, which leads to severe performance issues.
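A minimal free-space check along these lines can use Python's standard `shutil.disk_usage`. The 10% warning threshold is a common rule of thumb, not a vendor figure:

```python
import shutil

def disk_free_pct(path="/"):
    """Percent of the volume holding `path` that is still free."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total * 100

def low_space(path="/", warn_pct=10.0):
    """Alert when free space drops below the warning threshold
    (10% is a common rule of thumb, not a vendor figure)."""
    return disk_free_pct(path) < warn_pct
```

A monitoring tool does the same thing on a schedule, with alerting and history on top.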
Server Hardware Monitoring:
A server has many hardware components that need monitoring. Server performance issues may be caused by a malfunctioning or failing hardware component.
The fan draws heat away from the CPU by moving air across a heat sink to cool a particular component. If the CPU fan fails, the server will eventually overheat causing your server to become unavailable. To prevent this from happening, you should monitor CPU Fan speed. Monitoring the historical data of fan RPM is one way you can keep a watch for any sudden spikes in the fan RPM.
A power supply unit (PSU) converts AC to low-voltage, regulated DC power. For visibility into power health, it's important to monitor the amperage, voltage, and wattage of the power supply.
This refers to the temperature of the system board (motherboard). Unusually high temperatures can cause permanent damage to the server and will adversely affect server performance. Safe working temperature limits can be obtained from the manufacturer; monitor board temperature to ensure it does not exceed that safe range.
Temperature, air flow, and humidity are important parameters to monitor. Problems with temperature may be a direct result of faulty A/C, improper air flow, or dangerous humidity levels. Other components to monitor include the CMOS battery, disk array health, intrusion detection, and CPU hardware status.
Click here to learn more about the various server and application monitoring tools SolarWinds offers.
What to look for in Server Monitoring
Let’s cover some basics of server monitoring: what you should look for to proactively monitor your servers, and the data you need to troubleshoot issues quickly.
Server performance monitoring is the process of automatically scanning servers on the network for irregularities or failures. These scans allow administrators to identify issues and fix unexpected problems before they impact end-user productivity.
Server and application monitoring will also help you determine if the issue that’s affecting your application is really a problem with the network or with a server, and can help identify a root cause.
Key Server Components to Monitor for Server Monitoring:
When CPU usage impacts server performance, you have to either upgrade the CPU hardware, add more CPUs, or shut down services that are hogging these critical resources. Charts and graphs can help you visualize CPU load over time to determine normal usage, and to match workloads to CPU capacity to save on power.
If a server runs out of RAM, it uses a portion of the hard drive as virtual memory, moving inactive memory pages out to disk. This process is called swapping, and it causes performance degradation because the hard drive is much slower than RAM.
Swapping also contributes to file system fragmentation which degrades overall server performance. So, it’s important to have constant visibility into RAM usage. One way to balance rising RAM usage is to add more RAM. This is an economical way to boost server performance. You should monitor RAM utilization to determine which processes are causing spikes in usage for effective server monitoring.
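Spotting usage spikes against a trailing baseline can be sketched like this; it is a toy detector on polled RAM-usage percentages, not what any particular monitoring product implements:

```python
from statistics import mean

def ram_spikes(samples, window=3, factor=1.5):
    """Return the indexes of samples that exceed `factor` times the
    trailing-window average: crude spike detection on polled RAM usage."""
    spikes = []
    for i in range(window, len(samples)):
        baseline = mean(samples[i - window:i])
        if samples[i] > baseline * factor:
            spikes.append(i)
    return spikes

usage_pct = [40, 42, 41, 43, 90, 44]  # percent RAM used per polling cycle
```

On the sample series above, only the 90% reading stands out against its trailing average; correlating that index with per-process data tells you which process caused the spike.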
In Part 2 we'll discuss Hardware Aspects of Server performance monitoring.
If you have not already started your IPv6 transition planning, now is a good time. Think of where your business will be in three years and how you'll interface with the rest of the world. This will provide a clearer picture of the action that is required. Once IPv4 addresses are depleted, you will still be able to communicate with all other IPv4 hosts out there, but you will not be able to reach IPv6-only hosts. This is what you want to avoid. Don't get too far behind in the planning phase of your inevitable v4-to-v6 transition.
The first thing you should do when conducting an IPv6 readiness audit is make an inventory of your network. Include all of your systems and document any mystery devices you find. Inventory all devices, including firewalls, routers, switches, and load balancers. Your systems team will need to audit server operating systems and key infrastructure applications like DNS and e-mail. Application owners will likely need to audit each of their applications, both at the transport layer and in any user interfaces where IP addresses are entered. For your DBAs, include any place where an IP address is referenced. This is by no means a complete list, but you get the picture.
If you are currently implementing Windows 7 or Server 2008/2008 R2, remember that you are already adding IPv6 into your IT environment by default. Consequently, a remediation plan is necessary to maintain control of your infrastructure until you are ready to fully enable IPv6.
SolarWinds IP Address Manager can help you coordinate your migration to IPv6 by providing the ability to add IPv6 sites and subnets for planning purposes. IPv6 addresses can then be grouped to assist with network organization. Use the discovery tool to start your inventory task, and know what is out there.
To get a better understanding of the current state of IPv6 readiness in SolarWinds products, see this product blog.
To test IPv6 on your current browser- click here.
Basic Configuration Management Strategy
According to Enterprise Management Associates, approximately 70% of network issues are caused by configuration problems such as typos, untracked changes, and inconsistent or non-standardized configurations. Unless you implement a strategy to avoid them, these issues will occur more frequently, especially when device configs aren't backed up, rollbacks are not an option, and change tracking is not in place.
If you are new to managing networks, one topic you should explore is how to increase network availability through configuration management. There are several affordable configuration management tools on the market that can help automate the process. A basic configuration management strategy consists of three strategies.
Employing a tool that tells you when something has changed, and by whom, eliminates the blame game. When network engineers use Telnet or SSH to make changes to a remote device, you need to know. Having a notification process in place allows you to roll back the change if necessary. Some tools, such as NCM, provide before-and-after config change details that tell you who made the change, why, and in what context. NCM also lets you schedule these change notifications or set up real-time change detection based on syslog.
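The before-and-after change detail is essentially a diff of two config snapshots. A plain unified diff illustrates the idea (a sketch using Python's `difflib`; NCM's actual notification format differs, and the config lines below are made up):

```python
import difflib

def config_diff(before, after):
    """Unified diff of two device configs, roughly what a before/after
    change notification would contain (NCM's own format differs)."""
    return "\n".join(difflib.unified_diff(
        before.splitlines(), after.splitlines(),
        fromfile="running-config (before)",
        tofile="running-config (after)",
        lineterm=""))

# Illustrative config snapshots, not from a real device.
before = "hostname core1\nsnmp-server community public RO"
after = "hostname core1\nsnmp-server community S3cret RO"
diff = config_diff(before, after)
```

Lines prefixed `-` were removed and lines prefixed `+` were added, which is exactly the detail you want in a change notification.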
Use a tool that contains a bulk change mechanism. This helps eliminate CLI typos, which all humans are susceptible to making. When making bulk changes to many devices, logging is crucial. With NCM, all the commands are recorded so you can determine what went wrong when troubleshooting.
Best practices suggest you schedule the bulk changes during off hours, which is nice because you don't have to be stuck there doing it. NCM allows you to run changes on the fly as well. For example, you can make changes only to specific devices that contain a certain IOS version.
Do you know what devices you have out in the field? Sometimes users at remote sites may buy their own wireless access points and switches and add them. Inventory management will help you learn if routers get moved from site to site, help with device failure swap outs, and monitor theft/loss prevention.
The NCM inventory reports list important details that are critical for understanding device failures, such as which cards or components are installed, which IOS version is running, and so on. If devices are added frequently, you can schedule nightly network reports, which are customizable. Set them up to report on a specific group of devices, or exclude certain items, such as the IP route table on your core routers.
Where these reports really come in handy is with maintenance renewals for your hardware. With Cisco devices, you'll need the chassis ID, which can be pulled into any report. Reports also help you determine which devices are in or out of commission.
When you are tasked with managing routers, switches, firewalls, load balancers, VPN connectors and other network devices that typically have a text or menu based config, consider the three strategies to make the job easier.
Applications running across the WAN face many issues, such as latency, congestion, and low bandwidth. Your approach to troubleshooting these issues will vary based primarily on your budget.
For those on a limited budget, consider the following low-cost monitoring and troubleshooting options. If your network is slow, use the NetFlow support embedded in your router's IOS. NetFlow can tell you things like the number and destination of packets coming from the interfaces of a router, as well as the traffic load. The statistics you collect will allow you to create a baseline for your network, and from this baseline you can begin to learn when and where your bandwidth issues occur. NetFlow is easy to set up and generally requires only a few lines of configuration to enable. You can check whether your Cisco devices support NetFlow by performing a technology search here.
Keep in mind that vendors use different types of flow technologies.
If you are monitoring remote-site performance, consider setting up Cisco IP SLA. Cisco IP SLA is also embedded in various routers and switches and can be enabled depending on the IOS. In fully meshed or Ethernet-style networks, IP SLA will give you an understanding from a remote perspective. IP SLA statistics can be viewed from the command line or leveraged using a free tool. With the free SolarWinds IP SLA Monitor tool, you can assign thresholds for up to five operation types. Define your warning and critical thresholds per operation, and the tool will query the router. The results of your operations are pulled directly from the router and presented in a dashboard. With data from multiple operations, such as UDP echo, ICMP path echo (ping), TCP connect time, DNS resolution, and HTTP response, you can begin to troubleshoot your remote routers and prioritize bandwidth.
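Per-operation threshold evaluation reduces to a simple classification. The operation names come from the list above, but the millisecond thresholds and measurements below are made up for illustration:

```python
def classify(value_ms, warn_ms, crit_ms):
    """Map a measured operation time to a status for its thresholds."""
    if value_ms >= crit_ms:
        return "critical"
    if value_ms >= warn_ms:
        return "warning"
    return "ok"

# Operations from the post; the millisecond thresholds are made up.
thresholds = {
    "icmp-path-echo": (100, 250),
    "dns-resolution": (150, 400),
    "tcp-connect": (200, 500),
}
measured_ms = 120  # pretend every operation came back at 120 ms
status = {op: classify(measured_ms, *t) for op, t in thresholds.items()}
```

The same 120 ms reading is a warning for path echo but fine for DNS and TCP connect, which is why thresholds are defined per operation.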
Another avenue to consider would be CBQOS. CBQoS (Class Based Quality of Service) is a Cisco feature set that is part of the IOS 12.4(4)T and above. This will provide data about the QoS policies applied and traffic patterns within your network. CBQoS can make network performance and bandwidth utilization more effective.
Use the network monitoring features within your existing routers, such as NetFlow, to monitor core CPUs and traffic over major links. A simple NetFlow analyzer tool can also help you analyze flow packets and monitor network traffic.
In the case of Cisco devices, use IP SLA, CBQoS, and NetFlow to help you monitor and troubleshoot network issues.
Some files within your storage environment can obviously be discarded, but how do you know which files to keep?
How do you know what kinds of files are being stored, their ages, and who owns them, so that you can clean out your storage space?
You could use a feature within Storage Manager, the storage performance monitoring software, called File Analysis. File Analysis lets you determine the age, type, and ownership of the files stored across all your servers, virtual machines (VMs), and network-attached storage (NAS). You can set up rules for file analysis on physical servers and have it run on a schedule. The output provides reports that can be shared with others. Storage monitoring simplified!
When File Analysis runs, it looks at the metadata of each file, not the contents. It can place a load on the system while it runs, generally 15-20% of CPU, so it's usually best to run file analysis at night, outside of any backup window.
After File Analysis runs, you can review a summary of all the files on that server. By default, you get summaries by file type, file age, and owner. You can then drill down from the array, server, or VM to a single file to find more information. File Analysis will also locate orphan files, files created in the last 24 hours, and your largest files.
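A metadata-only pass of this kind can be sketched in a few lines of Python. It reads file names and timestamps, never contents; this is an illustration of the idea, not Storage Manager's implementation:

```python
import os
import pathlib
import tempfile
import time
from collections import Counter

def summarize_files(root, old_days=365):
    """Tally files under `root` by extension and count files whose
    modification time is older than `old_days`. Metadata only: the
    file contents are never read."""
    by_type, old_count = Counter(), 0
    cutoff = time.time() - old_days * 86400
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            ext = os.path.splitext(name)[1].lower() or "(none)"
            by_type[ext] += 1
            try:
                if os.stat(os.path.join(dirpath, name)).st_mtime < cutoff:
                    old_count += 1
            except OSError:
                pass  # file vanished or is unreadable; skip it
    return by_type, old_count

# Tiny demo tree so the function has something to summarize.
demo = tempfile.mkdtemp()
pathlib.Path(demo, "song.mp3").touch()
pathlib.Path(demo, "notes.txt").touch()
by_type, old_count = summarize_files(demo)
```

Grouping by extension is how a summary like "all the MP3 audio files" falls out of a single walk of the file system.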
In the example below, you can see all the MP3 audio files.
So with this information you can summarize your files and find specific ones you might want to take an action on. Some common use cases:
In summary, the File Analysis feature in Storage Manager provides analysis of file type, size, age, and owner. With this data, you can reclaim storage by removing old or unwanted files; storage performance monitoring has never been this easy. You can also enforce company file policies by identifying MP3, MPG, and similar files.
With the growth of virtualization, mobile devices, and the cloud, managing your IP address space assignments has become increasingly challenging. Using static spreadsheets to manage your space is no longer a viable method. Not just at the enterprise level, but for mid-size and smaller shops, space assignments and statuses change too rapidly to track manually. Add to that any unforeseen growth through acquisitions, staff changes, and/or a location move, and you quickly feel the drawbacks of relying on static spreadsheets.
Reasons to Replace:
Although simple to use, spreadsheets are not scalable and they can be difficult to access remotely. If by misfortune, fate, or destiny, you are one of the administrators assigned to the role of manually updating spreadsheets, there is a solution. You can automate the processes mentioned above using a simple software solution.
Your spreadsheet replacement tool should automate the discovery of any new devices added to your environment. That way, every time new addresses are allocated, someone (you!) won't need to manually update spreadsheets.
Look for a tool that will notify you of impending doom, such as IP address conflicts, with alerts or scheduled email reports. Poring over spreadsheets to determine which IP addresses are available is very 1990s. It is, after all, the 21st century.
Can your spreadsheets setup alerts for scope utilization and critical subnet usage?
Perhaps you would benefit from reports that you can customize that provide an audit trail that shows you who made what changes and when.
Have a look at IPAM and test out the Spreadsheet Import Wizard. The IPAM Import wizard provides an intuitive GUI that allows you to select which columns you want to import. The wizard also respects user delegation roles when you have multiple administrators responsible for particular subnets. The wizard imports IP addresses and/or subnet structures including folders/groups automatically, allowing you the option to organize them manually via the drag and drop feature. Don't worry MSPs, duplicate subnets are supported.
Are you planning for IPv6 migration yet? It's never too early to start migration planning. IPv6 addresses are difficult to manage using spreadsheets. Just imagine a 128-bit number expressed as eight hex blocks of four characters each; the possibility of errors grows with every address you type into a spreadsheet.
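Python's `ipaddress` module shows why automated parsing beats hand-typed cells: the same 128-bit address has several valid spellings, and a parser both normalizes them and catches typos a spreadsheet would silently accept. The address below is from the documentation prefix 2001:db8::/32:

```python
import ipaddress

# One address, two spellings; parsing normalizes both.
addr = ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001")
compressed = str(addr)    # canonical short form
exploded = addr.exploded  # all eight 4-character hex blocks

def is_valid_ipv6(text):
    """True if `text` parses as an IPv6 address; catches the typos a
    spreadsheet cell would silently accept."""
    try:
        ipaddress.IPv6Address(text)
        return True
    except ValueError:
        return False
```

An IPAM tool applies this kind of validation on every entry, which a spreadsheet cannot do.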
How does IPAM work? IPAM uses ping sweeps, SNMP scans, and neighbor scans (which read ARP tables) to retrieve up-to-the-minute statuses and data. You can see all events and space utilization summaries for your network on the summary web page, and drill down to the details page to see the IP address, MAC assignment, and hostname assignment histories.
Also, IPAM can manage DHCP scopes and IP reservations directly from the IPAM web console, reducing the time spent in RDP sessions to the DHCP server console.
For further information see the IPAM Administrator's Guide.
Too small a shop, or don't have a budget? Try the free IPAM tool.
Hi, I’m Joel Dolisy, chief architect for SolarWinds; over the next few months I’ll try to provide insight into what’s happening on the engineering side of the house.
As Josh mentioned a few weeks ago, I’m researching the types of failover scenarios we need to support in Orion, and how those relate to the existing hot-standby polling engine strategy. As usual, the complexity of the solution depends on the level of failover support the product needs. Claiming that a solution is fail-safe is actually a complicated problem to solve, involving the interaction of different players: the OS, the hardware (computer and network infrastructure), the storage subsystem, and the application.
As part of the research, I’m considering if leveraging the built-in clustering services offered by the OS (such as Windows Server 2003) would be something that would be acceptable for our customers. Leveraging those services has lots of advantages from an engineering standpoint as we can concentrate on building features that are actually useful for you instead of having to write more plumbing, but on the other side such a solution comes with its own set of constraints. Every time I encounter additional constraints I always have to weigh the benefits of the proposed solution against its additional configuration burden.
One of the big assets of our products is their simplicity of installation/configuration and that they don’t require days of planning before a rollout like a lot of our larger competitors. Every time something has the potential of disturbing this ease of use/configuration I need to make sure that I understand under which circumstances those additional configuration steps will impact our customers.
I feel that customers asking for failover capabilities are ready to take additional configuration steps required by such configurations, in fact, most of those users will already have SQL Server clustered and requiring the OS clustering service in this case should not be an issue.
At this point I would really like to hear your thoughts, comments, requirements, and experiences with failover scenarios, and how you expect us to support them.