
Geek Speak


It’s October!!

STOP. THINK. CONNECT.

Yes, you are right, it is National Cyber Security Awareness Month (NCSAM).

 

 

This year, we are celebrating the 10th anniversary of NCSAM, which is sponsored by the Department of Homeland Security and the National Cyber Security Alliance.

With the continuous advancement of technology, those trying to access your personal information are also growing smarter. The core intention of NCSAM is to educate online consumers and businesses about cyber security issues and the best practices for avoiding them.

 

 

NCSAM at SolarWinds – What does it mean to you?

It's that time of year when you look back at your security and safety precautions and weigh the consequences of your actions and behaviors online, while still enjoying the benefits of the Internet. To help you do this, we have compiled our best resources, which you can readily use to measure your current level of security.

 

Have you visited our all-new NCSAM page yet?

For the month of October, we will be discussing cyber security issues every week as follows:

 

Week     Dates            Area of Discussion
Week 1   October 1-6      General Security
Week 2   October 7-13     Being Mobile: Online Safety & Security
Week 3   October 14-20    Cyber Education
Week 4   October 21-27    Cybercrime
Week 5   October 28-31    Cyber Security

 

 

Keep an eye on this space, there’s more coming!!



Before we start to understand how this works, you need to know that SolarWinds Alert Central is a FREE SOFTWARE - totally, completely, free for life! 

  

Once you have installed Alert Central and linked it with your Orion (NPM, SAM, IPAM, NCM or other Orion-based products from SolarWinds), all your Orion alerts will be sent to Alert Central and any updates made to the alert in Alert Central will also be updated on Orion. It’s just a simple 3-step process to establish this integration:

 

#1 Configure Alert Central’s Email Settings

  • You need a new email account for managing alerts with Alert Central
  • Alert Central can receive email from POP or IMAP, with or without SSL, or with a direct connection to Microsoft® Exchange Web Services
  • Alert Central uses standard SMTP with or without SSL and authentication to send outbound email
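If you want to sanity-check that dedicated mailbox before pointing Alert Central at it, here is a minimal sketch using only the Python standard library; the host names and credentials are placeholders for your own environment.

    # Quick IMAP/SMTP sanity check for the mailbox Alert Central will use.
    import imaplib
    import smtplib

    IMAP_HOST = "imap.example.com"   # hypothetical
    SMTP_HOST = "smtp.example.com"   # hypothetical
    USER, PASSWORD = "alerts@example.com", "secret"

    # Inbound: can we log in over IMAP with SSL and open the inbox?
    with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
        imap.login(USER, PASSWORD)
        status, _ = imap.select("INBOX")
        print("IMAP login OK, INBOX select:", status)

    # Outbound: can we authenticate to the SMTP server?
    with smtplib.SMTP(SMTP_HOST, 587) as smtp:
        smtp.starttls()
        smtp.login(USER, PASSWORD)
        print("SMTP login OK")

If either login fails, fix the account first, since all alert intake and delivery depends on it.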

 

#2 Configure Orion to Generate Alerts for Different Alert Conditions

  • Use Orion Basic Alert Manager or Advanced Alert Manager to define alert conditions and determine when they are triggered
  • By default, Orion is set up to send alerts to Alert Central if you are running an Orion system with Core v2012.2.0, or if you are using NPM 10.4 / SAM 5.5 or above.

 

#3 Configure Routing Rules in Alert Central

  • Add your Orion source credentials to Alert Central
  • Configure the routing rules for Alert Central to route alerts to recipients based on custom conditions.
  • The most common scenarios for configuring alerts (sketched in code after this section) are:
    • Routing all alerts to a single group
    • Routing alerts from different locations or network segments to different groups
    • Routing alerts with different keywords to different groups

 


 

  • You can also add different Orion properties in Alert Central to route specific alerts, and test them on the Alert Central console to ensure they work well.
  • Best Practice: Set up a default alert group to receive all alerts that do not match any of your custom routing conditions.
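To make that routing logic concrete, here is a conceptual sketch in Python of keyword-based routing with a default fallback group. This is an illustration of the idea only, not Alert Central's actual rule engine; the keywords and group names are hypothetical.

    # Conceptual sketch of keyword-based alert routing (not Alert Central's engine).
    ROUTING_RULES = [
        ("database", "DBA team"),      # hypothetical keyword -> group mappings
        ("interface", "Network team"),
        ("disk", "Storage team"),
    ]
    DEFAULT_GROUP = "NOC"              # default group catches everything else

    def route(alert_subject):
        """Return the group that should receive this alert."""
        subject = alert_subject.lower()
        for keyword, group in ROUTING_RULES:
            if keyword in subject:
                return group
        return DEFAULT_GROUP

    print(route("Interface Gi0/1 down on core-sw-01"))  # -> Network team
    print(route("UPS battery low"))                     # -> NOC (default)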

 

Once you have configured your alert routing conditions you can set up escalation policies and on-call calendaring for each group to ensure alerts go to the right user.

 

Watch this short video to learn how to configure Orion alerts with Alert Central.

 

 

About SolarWinds Alert Central

SolarWinds Alert Central is centralized IT alert management software that provides amazingly streamlined alert management for your entire IT organization. It consolidates and manages IT alerts, alert escalation, and on-call scheduling to help ensure all your alerts get to the right people, in the right groups, at the right time. And Alert Central integrates with just about every IT product that sends alerts, using a simple setup wizard to walk you through the process.

 

  • Centralize alerts from all your IT systems
  • Automatically route alerts to the right group
  • Filter alerts from noisy monitoring systems
  • Automatically escalate unanswered alerts
  • Easily create and use on-call calendars

 

And the best part is that SolarWinds Alert Central is completely FREE! Download Alert Central - for VMware or Hyper-V

 


To catch network issues early, before they turn into a nasty outage, you need to watch a few important network monitoring parameters. With so many factors, objects, and interfaces, the burning question is: what should you monitor?

 

Knowing the main causes of network downtime helps, but a lot depends on the design of your network, the devices, the services running, and so on. In general, though, which critical parameters need steady monitoring?


Be the first to know before it affects your users! Here are a few pointers on some important monitoring parameters:

 

Availability and Performance Monitoring: Monitoring and analysis of network device and interface availability and performance indicators help ensure that your network is running at its best. Some of the factors that influence good network availability are:

 

  • Packet loss & latency
  • Errors & discards
  • CPU, memory load & utilization

 

Detailed monitoring and analysis of this data for your network elements and timely alerting on poor network conditions like slow network traffic, packet loss, or impaired devices helps safeguard your network from unwarranted downtime.
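As an illustration of the kind of probe an NMS runs under the hood, here is a minimal Python sketch that measures average latency and packet loss using the system ping command. Linux-style ping output is assumed, and the target address and alert thresholds are examples to adapt.

    # Minimal latency/packet-loss probe built on the system ping command (Linux flags).
    import re
    import subprocess

    def probe(host, count=10):
        out = subprocess.run(
            ["ping", "-c", str(count), "-q", host],
            capture_output=True, text=True
        ).stdout
        loss = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
        rtt = re.search(r"= [\d.]+/([\d.]+)/", out)  # min/avg/max line -> capture avg
        return (float(loss.group(1)) if loss else 100.0,
                float(rtt.group(1)) if rtt else None)

    loss_pct, avg_rtt_ms = probe("192.0.2.1")    # example device address
    if loss_pct > 5 or (avg_rtt_ms or 0) > 150:  # example alert thresholds
        print("ALERT: loss=%s%% avg_rtt=%sms" % (loss_pct, avg_rtt_ms))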

 

Errors & Discards: Errors and discards are two different things: errors designate packets that were received but couldn't be processed because there was a problem with the packet, while discards are packets that were received without errors but dropped before being passed on to a higher-layer protocol.

 

A large number of errors and discards reported in the interface stats is a clear indication that something has gone wrong. Investigating the root cause will identify the issue so it can be quickly resolved.
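Both counters live in the standard interface MIB and can be polled directly. Here is a hedged sketch assuming the pysnmp library; numeric OIDs are used to avoid MIB lookups, and the device address, community string, and interface index are placeholders.

    # Poll ifInErrors/ifInDiscards for one interface via SNMP (assumes pysnmp installed).
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    IFINDEX = 1  # interface index to check (placeholder)
    err_ind, err_stat, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData("public"),                 # your read community
        UdpTransportTarget(("192.0.2.1", 161)),  # device IP (example address)
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.2.2.1.14.%d" % IFINDEX)),  # ifInErrors
        ObjectType(ObjectIdentity("1.3.6.1.2.1.2.2.1.13.%d" % IFINDEX)),  # ifInDiscards
    ))

    if err_ind:
        print("poll failed:", err_ind)
    else:
        # Compare against the previous poll; steadily rising counts signal trouble.
        for name, value in var_binds:
            print(name.prettyPrint(), "=", int(value))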

 

Device Configuration and Change: Non-compliant configuration changes are a major cause of network downtime. Not knowing what changes are made, and when, is even more dangerous for your network. Create a system that monitors and approves device configuration changes before they are applied to production. Setting up alerts to notify you whenever a change is made helps you keep device configuration changes from becoming the cause of network downtime.

 

Syslog and Trap Messages: Syslog and traps serve separate functions - syslog messages come in for authentication attempts, configuration changes, hits on ACLs, and so on. Traps are event-based and come in when some device-specific event has occurred, such as an interface with too many errors or high CPU usage.

 

The advantage here is that instead of waiting for your network management system (NMS) to poll for device information, you can be alerted to unusual events based on these syslog and trap messages.
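As a bare-bones illustration of that event-driven model, here is a minimal, stdlib-only Python listener that flags incoming syslog messages matching a few keywords. The keywords are examples; a real NMS also parses severity and facility, deduplicates, and more.

    # Tiny UDP syslog listener that flags messages containing chosen keywords.
    import socket

    KEYWORDS = ("%SYS-5-CONFIG_I", "authentication failure", "LINK-3-UPDOWN")  # examples

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 514))  # standard syslog port; binding it needs privileges
    print("listening for syslog on udp/514 ...")

    while True:
        data, (src, _) = sock.recvfrom(4096)
        message = data.decode(errors="replace")
        if any(k in message for k in KEYWORDS):
            print("ALERT from %s: %s" % (src, message.strip()))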

 

Network Device Health Monitoring: Monitor the state of your key device components, including temperature, fan speed, and power supply, so that you do not find yourself in a network outage caused by a faulty power supply or an overheated router. Set pre-defined thresholds to be alerted every time these values are crossed.

 

So, start monitoring the critical factors that impact the availability and performance of your network and minimize network downtime!

 

To learn more: Unified Network Availability, Fault and Performance Monitoring

Alright Security folks, WEBCAST TIME!!

 

What is this all about?

This webcast will discuss and showcase how various organizations are dealing with IT security. You will learn whether these organizations are implementing tools and techniques to deal with their security data analytics problems, whether they are automating pattern recognition, and more.

 

 

When: Thursday, October 03 at 1:00 PM EDT

 

 

Why shouldn't you miss this?

During June and July, along with SANS, we conducted a survey of 600+ security professionals to explore how organizations are handling their security data for better analysis and detection. We found some shocking results; for example, almost one-third of the security pros out there are still acting on hunches when it comes to detecting security threats.

 

 

To stay updated, all you have to do is attend this webcast.

 

 

Registration link: https://www.sans.org/webcasts/analyst-webcast-results-analytics-intelligence-survey-ii-96807

 

 


 

 

Speakers:

Along with SANS analyst Dave Shackleford, you will also hear from Nicole Pauls of the SolarWinds crew.

 

 

Nicole Pauls

Nicole Pauls is a Director of Product Management for Security Information and Event Management (SIEM) at SolarWinds, an IT management software provider based in Austin, Texas. Nicole has worked in all aspects of IT from help desk support, to network, security, and systems administration, to complete IT responsibility over the span of 10 years. She became a product manager to help bring accessible IT management software to the masses.

Because Microsoft® SQL Server® is such a widely used database, slowdowns within its environment can lead to issues for multiple applications. More often than not, the root cause of such slowdowns is a memory bottleneck. There are many issues that affect SQL Server performance and scalability. Let's look at a few of them.

 

  • Paging: Memory bottlenecks can lead to excessive paging, which can impact SQL Server performance.
  • Virtual memory: When your SQL Server consumes a lot of virtual memory, information constantly moves back and forth between RAM and disk. This puts the physical disks under tremendous pressure.
  • Memory usage: No matter how much memory is added to the system, it appears as though SQL Server is using all of it. This can happen when SQL Server caches the entire database in memory.
  • Buffer statistics: When other applications consume lots of memory and your SQL Servers don't have enough, there can be issues related to page reads, buffer cache, etc.
  • Other: Memory bottlenecks can also occur if databases don't have good indexes, or if applications or programs are constantly processing user requests.

 

Monitor SQL Server Memory

You must continuously monitor your SQL Server in order to improve its overall performance. During this process, it's vital to check the statistics (optimally, through alerts) of various performance counters related to SQL Server memory. This is especially true when you're constantly adding more databases to your SQL Server. Along with memory resources, you'll also have to monitor CPU load, storage performance, physical and virtual memory usage, query responsiveness, etc., which can also cause performance issues in the database.
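As one concrete illustration, the memory counters mentioned above are exposed through the sys.dm_os_performance_counters DMV and can be polled from a script. Here is a hedged sketch assuming the pyodbc driver; the server name and the 300-second rule of thumb are placeholders to adapt.

    # Poll SQL Server memory counters via sys.dm_os_performance_counters (assumes pyodbc).
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sql01;Trusted_Connection=yes"
    )  # hypothetical server name
    cursor = conn.cursor()
    cursor.execute("""
        SELECT counter_name, cntr_value
        FROM sys.dm_os_performance_counters
        WHERE object_name LIKE '%Buffer Manager%'
          AND counter_name IN ('Page life expectancy', 'Buffer cache hit ratio')
    """)
    for name, value in cursor.fetchall():
        print(name.strip(), value)
        # Common rule of thumb: alert when page life expectancy drops below ~300s.
        if name.strip() == "Page life expectancy" and value < 300:
            print("WARNING: possible memory pressure")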

 

Improve Your SQL Server Performance

If you’re really looking to improve your SQL Server performance, it’s imperative to understand your existing environment, and which performance counters you really need.

 

A server monitoring tool should provide out-of-the-box user experience monitoring for your SQL Server database. It will allow you to simulate end-user transactions and proactively measure the performance and availability of your SQL Server. The tool should also:

  • Ensure the availability and performance of your SQL Server.
  • Give you visibility into statistics and the health of your SQL Server, and let you set performance thresholds.
  • Build custom reports to show SQL Server availability and performance history.
  • Provide real-time remote monitoring of any WMI performance counters to troubleshoot application issues.

 

SolarWinds Server & Application Monitor (SAM) comprehensively monitors SQL Servers and other Microsoft applications running on your servers. Try the fully functional, free 30-day trial.

If you are considering HP Network Node Manager i (NNMi) for your network monitoring requirements, or if you are already using NNMi in your network and are looking for a change, here are five concrete reasons why you should consider SolarWinds Network Performance Monitor (NPM).

 

#1 NPM is an easy-to-use network management solution

    • Intuitive Web interface
    • Customizable and interactive dashboards and charts
    • No training cost or product management overheads

 

#2 NPM is affordable, enterprise-class software

    • Transparent pricing
    • Flexible licensing
    • No hidden costs

 

#3 NPM can be installed and deployed typically in under an hour

    • Do-it-yourself deployment
    • No professional services or product delivery wait time
    • Download, install and start monitoring in under an hour

 

#4 Built by network engineers for network engineers

    • Purpose-built based on the needs of the IT community
    • Customer and community-driven product enhancement

 

#5 Buy only what you need

    • Unified network fault, availability, and performance monitoring in a single product
    • No add-ons needed for network monitoring functionality
    • Modular architecture allows you to add other network management solutions and integrate them with NPM

 

More information on the product comparison is available in this SlideShare


 

That’s not all!

 

For network performance and combined bandwidth monitoring and network traffic analysis, you can save up to 75%[1] over HP NNMi with SolarWinds Bandwidth Analyzer Pack (Network Performance Monitor + NetFlow Traffic Analyzer).

 

[1] Estimated cost savings for 500 nodes using SolarWinds Network Performance Monitor SL2000 and SolarWinds NetFlow Traffic Analyzer for Network Performance Monitor SL2000 vs. HP Network Node Manager Advanced + iSPI Performance for Metrics + iSPI for Traffic + iSPI for Multicast. Based on available August 2013 data.

   

Learn More About SolarWinds NPM

Troubleshooting polling issues can involve many steps, and there are a few quick things we can try before placing a support call. For example, say a piece of networking equipment such as a router or switch being monitored in Orion suddenly shows its interfaces as unknown (gray). This tells us that SNMP information is not being recorded for the interfaces; more than likely, no CPU or memory utilization is being reported either. To troubleshoot this issue, we can do the following (a quick scripted check follows the list):

  1. Verify that the network device is configured correctly for sending SNMP traffic to the Network Management Software. A quick example would be verifying that the community string is correct.
  2. Run a packet capture on the server hosting the Network Management Software. We can poll the device from the Orion website by selecting "poll now" from Node Details. This sends SNMP requests (UDP 161) to the destination network device. While you are polling the device, run a sniffer trace using a program such as Wireshark (formerly Ethereal). If there is a network issue, you will see SNMP frames leaving your Network Management Server with no response returned from the network device.
  3. If you do see frames returning from the network device, network connectivity issues are ruled out. Next, verify there is no firewall blocking the SNMP frames on the server hosting the Network Management Software. A sniffer such as Wireshark reads incoming frames at the network interface card, so a local server firewall such as Windows Firewall can still block SNMP frames from reaching the Network Management Software even though we see them hitting the NIC. Note that if a firewall is blocking SNMP traffic on the local server, no SNMP-related information will be reported in your Network Management Software.
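Alongside the packet capture, a scripted SNMP GET of sysDescr can quickly confirm whether the device answers your community string at all. A minimal sketch assuming the pysnmp library; the device address and community string are placeholders.

    # Quick SNMP reachability/community-string check (assumes pysnmp installed).
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData("public"),                                       # your community string
        UdpTransportTarget(("192.0.2.1", 161), timeout=2, retries=1),  # device IP (example)
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    ))

    if error_indication:
        # No answer: suspect connectivity, a firewall, or a wrong community string.
        print("No response:", error_indication)
    elif error_status:
        print("SNMP error:", error_status.prettyPrint())
    else:
        for name, value in var_binds:
            print(name.prettyPrint(), "=", value.prettyPrint())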

In summary, verifying end-to-end connectivity between the Network Management Server and the device in question is the key, and these few steps can verify and possibly fix polling issues, saving the end user some time.

The WSUS Inventory task collects server information and status information from the WSUS server and populates that data into the Patch Manager database. This data is collected via the WSUS API. An inventory task must be executed in order to use reporting.

 

Create the WSUS Inventory Task

There are several methodologies that can be used to create a WSUS Inventory task, but the simplest is to right-click the WSUS node in the console and select "WSUS Inventory" from the menu.


The first screen presented shows the WSUS Inventory Options. These provide the ability to handle certain advanced or complex inventory needs, but in 99% of instances the options will be left at the defaults. In a subsequent post I'll discuss these four options in greater detail. Click Save.

 

On the next screen you have the standard Patch Manager dialog for scheduling the task. Schedule the inventory to occur as needed. Typically the WSUS Inventory task is performed daily, but there are scenarios in which you may wish to perform the inventory more or less frequently. Be careful not to schedule the WSUS Inventory concurrently with other major operations, such as backups or WSUS synchronization events.

 

WSUS Extended Inventory (Asset Inventory)

In addition to the WSUS server, update, and computer status data, it is also possible to collect asset inventory data via the WSUS Inventory task. If the WSUS server is configured to collect this asset inventory data, it will be automatically collected by the WSUS Inventory task. To enable the WSUS server to collect asset inventory data from clients, right-click on the WSUS node in the console, select "Configuration Options" from the menu, and enable the first option "Collect Extended Inventory Information".

 

Using System Reports

With the inventory task completed, you now have access to several dozen pre-defined report templates in the two report categories named "Windows Server Update Service" and "Windows Server Update Service Analytics". The data obtained from the WSUS server is re-schematized within the Patch Manager database, optimized for reporting, and presented as a collection of datasources that are used to build reports. To run a report, right-click the desired report and select "Run Report" from the context menu.

 

Category: Windows Server Update Services

In the "Windows Server Update Services" report category there are 24 inter-dependent datasources available. Ten of them provide 327 fields of basic update and computer information, along with WSUS server data. The fourteen datasources named "Update Services Computer..." provide access to 111 fields of asset inventory data collected by the WSUS Extended Inventory.

 

Category: Windows Server Update Services Analytics

In the "Windows Server Update Services Analytics" report category there are nine self-contained, independent, datasources. The "Computer Update Status" datasource is the basic collection, and the other eight are based on modifications of this datasource, either by adding additional fields, or filtering the data.

 

In subsequent articles we'll look in more detail at how to customize existing reports and how to build new reports, including a more in-depth look at datasources and the WSUS Inventory Options.

 

If you're not currently using Patch Manager in your WSUS environment, the rich reporting capabilities are a great reason to implement Patch Manager.

It's been 35 years since the very first solid-state drive (SSD) was launched, under the name "solid-state disk". These drives were called "solid-state" because they contained no moving parts, only memory chips. This storage medium was not magnetic or optical; it was built from solid-state semiconductors such as battery-backed RAM, RRAM, PRAM, or other electrically erasable, RAM-like non-volatile memory chips.

 

In terms of benefits, SSDs stored and retrieved data faster than a traditional hard drive could - but this came at a steep cost. Over the years, the industry has been on a constant quest to make SSD technology cheaper, smaller, faster, and higher in storage capacity. A post on StorageSearch.com shows the development and transformation of the SSD, from its first usage and availability until now.

 

Why Should Storage and Datacenter Admins Care?

More than the users of PCs and notebooks, it's storage admins who spend time managing and troubleshooting drives, detecting storage hotspots, and chasing other performance issues. It's imperative that storage and datacenter admins understand SSD technology so they can better apply and leverage it in managing the datacenter.

 

Application #1 – Boosting Cache to Improve Array I/O

A cache is temporary storage placed in front of the primary storage device to make storage I/O operations faster and transparent. When an application or process accesses data stored in the cache, it can be read and written much more quickly than from the slower primary storage device. All modern arrays have a built-in cache, but SSD can be leveraged to "expand" this cache, thus speeding up all I/O requests to the array. Although this approach has no way to distinguish between critical and non-critical I/O, it has the advantage of improving performance for all applications using the array.

 

Application #2 – Tiering – Improving Performance at the Pool Level

SSDs help storage arrays with storage tiering by letting data move dynamically between different disks and RAID levels to meet different space, performance, and cost requirements. Tiering lets the array pool (RAID group) across drives of different speeds (SSD, FC, SATA) and then uses analysis to put frequently accessed data on SSD, less frequently accessed data on FC, and the least frequently accessed data on SATA. The array constantly analyzes usage and adjusts how much data is on each tier. SSD is used where applications demand increased performance with high I/O, and it is often the top tier in an automated storage tiering approach. Tiering is now available in most arrays because of SSD.

  

Application #3 – Fast Storage for High I/O Applications

Since arrays generally treat SSD just like traditional HDD, if you have a specific high-I/O performance need, you can use SSD to create a RAID group or storage pool. From an OS and application perspective, the operating system sees the RAID group as just one large disk, while the I/O operations are spread out over multiple SSDs, enhancing the overall speed of the read/write process. With no moving parts, SSDs contribute to reduced access time, lower operating temperature, and enhanced I/O speed. Keep in mind that SSDs cannot hold huge amounts of data, so data for caching should be chosen selectively based on which data will require faster access - driven by performance requirements, frequency of use, and level of protection.

 

SSD Benefits in Virtualization

In a virtualized environment, there is the fundamental problem of high latency when the host swaps to traditional disks rather than memory. It takes only nanoseconds to retrieve data from memory, whereas it takes milliseconds to fetch it from a hard drive. When SSDs are used for swapping to host cache, the performance impact of VMkernel swapping is reduced considerably. When the hypervisor needs to swap memory pages to disk, it swaps to the .vswp files on the SSD drive. Using SSDs to host ESX swap files can eliminate network latency and help optimize VM performance.

 

Conclusion

The application of solid-state drives has become significant in achieving high storage I/O performance. Proper use of SSDs in your storage environment can ensure your datacenter meets the demands of today's challenging environments. If you are interested in learning more about the advantages of SSD over hard disk drives (HDD), take a look at this comparative post from StorageReview.com.

Network administrators work endlessly to maintain high-quality network services so that business-critical processes and applications run smoothly. They are expected to monitor and deliver high network performance without any downtime. While achieving this has always been daunting for network administrators, the need for high network availability has increased manifold in recent years.

 

What is High Availability for a Network?

High availability (HA) can refer to the degree to which a network device, application, or other component in an IT infrastructure is operable and meeting service level objectives. Network Management Systems (NMS) are often used to track the performance and responsiveness of applications and the servers on which they run and to document that high availability is being maintained in a network. Network availability is typically included in SLAs established for IT departments. Hence, monitoring your network performance on a regular basis becomes a necessity.

 

How can an NMS help achieve HA?

Enterprises struggle to sustain their target network availability objectives as their networks scale up, and when network bottlenecks occur it can be difficult to ensure network availability. A network monitoring system can help administrators by monitoring for and detecting network issues, helping them determine the root cause of an issue and troubleshoot it.

 

Network device availability can be monitored by an NMS using ICMP to poll the devices for response time and status. By monitoring various network performance indicators - such as disk space, CPU load, memory utilization, bandwidth utilization, packet loss, latency, errors, discards, and quality of service for SNMP-enabled devices - an NMS can help administrators make sure that nothing affects their network availability. For instance, monitoring hardware performance using an NMS will alert you when the CPU is overloaded with tasks, which can put an organization's network performance and availability at risk.

 

What if your NMS fails?

When network administrators monitor and analyze network performance or uptime, the quality of the network data entirely depends on the continuity of the NMS in their environment. To make the NMS available at all times, administrators must have a contingency plan in the event of a system failure.

A Failover solution can ensure NMS availability in a network environment if the server goes down, but only for a particular time period. This process is kept in place to provide monitoring of network availability for a short period, until network administrators solve the issue. There are three different approaches that can help administrators while implementing a failover protection.

  • Active – Active Solution: Administrators have two active servers, both with the NMS up and running, mirroring the application, database, etc.
  • Active – Passive Solution (High Availability): Administrators have the NMS running on an active/primary server; in the event of failure, the secondary/passive server takes over all processes for continuity. This is normally implemented over the LAN.
  • Active – Passive Solution (Disaster Recovery): Implemented over the WAN; at the time of failure, a secondary/passive server in a different location takes over, so end users simply continue to access the applications as they normally would.

Normally, an NMS follows the Active – Passive (High Availability) approach for achieving high network availability. Disaster Recovery, on the other hand, is a backup strategy for when your failover solution falls apart. As businesses depend heavily on the availability of various services, enterprises can use an NMS and a failover solution together for uninterrupted network monitoring and high network availability.

 

NMS and Failover Protection

A network monitoring solution helps you maintain your network's high availability, and with a failover solution, network administrators can keep the monitoring itself up and running.


SolarWinds Network Performance Monitor helps you monitor your network, proactively detect problems, and troubleshoot them. Now, with a Failover Engine, network administrators can be sure they never lose network visibility, even for a short time frame.

By now, you are no doubt aware that SAM 6.0 and NPM 10.6 are both out. Yes, we're pretty excited about it around here, and I want to highlight one particular feature you may have missed: web-based reporting.

 

Report Writer? Yeah, It Works, But...

Yeah, we've been giving you the ability to create solid, information-packed reports for years; it's basic network monitoring, and Report Writer was handling it. The process would go roughly as follows:

  1. Select a type of data (e.g., historical application or node availability, interface traffic, active alerts)
  2. Select the devices you want to report on
  3. Apply some limited filtering and formatting
  4. Execute the report query

It was pretty straightforward, but somewhat limited: you couldn't include a chart of reported data, and you'd need to configure a specific resource just to see the report in the web console. Some of you let us know how limited reporting was, and we listened. After some significant effort we are quite pleased to give you web-based reporting.

 

Web-Based Reporting. Sounds Good. What is it?

Web-based reporting in SolarWinds products using Orion Platform 2013.2 and higher is a radical reworking of SolarWinds' established approach to IT management reporting. Previously, you designed and generated your reports in a separate application, Report Writer, and then you either printed them out, emailed them to interested parties, or linked to them from designated resources in the web console. Now, with web-based reporting, you can construct reports directly from the Orion Web Console using the very same resources you have already been configuring and using in the web console. Any resource in the web console is eligible for inclusion in a new web-based report, and you are no longer limited to a single list of data: you can include as many charts and resources as you want. Yes, it's pretty exciting. For a thorough walk-through of the process, check out the SolarWinds Orion Web-Based Reports Technical Reference and the chapter "Creating Reports in the Web Console" in the SolarWinds Orion NPM Administrator Guide. After you've read up, or maybe even before, download your own evaluation of NPM 10.6 or SAM 6.0 and give it a try yourself.

IP address management (IPAM) solutions today call for management capabilities beyond just IP address management. To enhance overall administrative effectiveness, network administrators look for consolidated IP, DHCP, and DNS management and monitoring abilities.

 

So, taking a step beyond IPAM into the DHCP and DNS realms of 'DDI', let's examine a few practicalities involved in monitoring and managing these services in your network.

 

How are DHCP and DNS important to IPAM?

 

Domain Name System (DNS) is the way domain names are located and translated into IP addresses. DNS comprises zones and records that are individual to each DNS service.

 

These DNS zones and records often need to be updated, created, and sometimes removed. When this is done manually, there are cases where records get corrupted or end up containing incorrect information. Also, in large networks where multiple admins operate, there is a chance that the follow-up tasks of IP address allocation, such as updating DNS records, are never done. The worst-case scenario is when the admin encounters a website-down or domain-name-not-resolving issue and finds the cause to be a wrong IP in the DNS records.
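A lightweight way to catch that last case early is to spot-check critical records against the IPs you expect. A minimal, stdlib-only Python sketch; the host names and addresses are hypothetical.

    # Spot-check DNS records against the IPs you expect them to resolve to.
    import socket

    EXPECTED = {                       # hypothetical records to verify
        "www.example.com": "192.0.2.10",
        "mail.example.com": "192.0.2.25",
    }

    for host, expected_ip in EXPECTED.items():
        try:
            actual_ip = socket.gethostbyname(host)
        except socket.gaierror as exc:
            print("%s: does not resolve (%s)" % (host, exc))
            continue
        status = "OK" if actual_ip == expected_ip else "MISMATCH (got %s)" % actual_ip
        print("%s -> %s: %s" % (host, expected_ip, status))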

 

Network admins managing BIND DNS know the pain of using the command line interface every time they have to update changes to DNS zones and records.

 

Dynamic Host Configuration Protocol (DHCP) is a protocol that allows network administrators to centrally manage and automate the assignment of IP addresses to devices in a network. This eliminates the manual task of assigning IP Addresses and enhances the overall management of the IP administration.

 

In short, a DHCP server takes care of the central management and automation of tasks such as allocating IP addresses, lending out IPs to transient users, and reclaiming addresses that are no longer in use. To set up a DHCP server, the admin carries out tasks like calculating subnets, determining scopes, splitting scopes, allocating IP reservations, and so on.

 

Consolidated IP Address Management Solution

 

With DNS and DHCP services running in your network, IP address or IP space management can be combined and centralized to increase efficiency in managing these services from a single interface.

 

Some of the functions present in a consolidated IP Address Management, DHCP/DNS Monitoring and Management solution are:

 

  • Management - Visibility into scope utilization and management, provision to add new scopes and subnets, subnet scan and corresponding updates on the DHCP/DNS server. Add, edit or delete DNS servers, zones and records alongside IP address and DHCP information in the same integrated interface.
  • Monitoring - Monitor in real-time the availability and performance of DHCP and DNS services, viz. scope utilization, zone status, etc.

 

Benefits:

 

  • Manage and monitor in real-time DHCP/DNS services alongside IP space management.
  • Stop maintaining multiple management consoles and seamlessly integrate with existing DHCP/DNS environments.
  • Easily propagate DHCP/DNS changes to the servers.
  • No expensive investments in appliances requiring vendor expertise and costly upkeep.

 

One place to start is SolarWinds IP Address Manager (IPAM). IPAM provides the flexibility of a user-friendly GUI to easily perform add/edit/delete functions for BIND DNS zones and records, thereby eliminating the struggle of having to use the CLI. Download a trial version and start managing and monitoring Microsoft DHCP/DNS, BIND DNS, and Cisco DHCP services, including Cisco ASA devices - all from a unified, easy-to-use Web console.

Over the last 3-4 days, we have been discussing several revelations that came out of the SANS Security Survey 2013. We got a glimpse of how organizations use security reports, how well they are equipped to collect security data and correlate it for threat intelligence, and more.

 

 

Now, that takes us to the final part of the findings:

  • How satisfied organizations are with their current security analytics & intelligence capabilities
  • Top IT Security investments planned by organizations for the future

 

IT SECURITY CAPABILITIES

Organizations tend to place too much focus on data protection, with the result that they do not monitor the event logs on their network. Log messages such as syslogs, server logs, and system logs are the means to actionable results.

 

Typically, IT Security capabilities are measured based on these 3 factors:

 

  1. Ability to identify potential risks across your IT infrastructure
  2. Intelligence to identify anomalies and suspicious behavior in your network patterns
  3. Ability to respond in time

 

For this to happen, you need to have visibility across the security events on your network and the intelligence to correlate the suspicious activities.

 

 

But the mind-boggling results that came out of this survey were:

[Survey result charts on security analytics and intelligence capabilities]

Need for a Security Solution

The above results clearly show the need for a strong security solution that alerts you when a specific security condition is encountered, helps you troubleshoot issues and react to policy violations, and performs event forensics and root cause analysis to identify suspicious behavior patterns and anomalies. This leads to the conclusion that organizations need to invest sensibly in securing their networks, sensitive data, and systems against potential threats and risks.

 

 

Here are some statistics that were revealed regarding the security investment intentions of the participating organizations.

[Survey charts on planned IT security investments]

 

How would an SIEM Solution help?

  • Event correlation for event context and actionable intelligence
  • Real-time analysis for immediate threat detection and mitigation
  • Advanced IT search to simplify event forensics and expedite root cause analysis
  • Built-in reporting to streamline security and compliance

 

 

SolarWinds Log & Event Manager (LEM) is powerful, highly affordable SIEM software delivered as an easy-to-deploy virtual appliance. It helps you collect, correlate, and analyze log data, and alerts you in real time. And with its Active Response technology, you can automate incident responses.

 

Join us at Las Vegas for SANS Network Security 2013

Are you already there? Well, look for us at Booth 14. We'll be the ones with awesome t-shirts, buttons, and giveaways! Make sure you stop by, have a chat with us, and check out our line-up of security products. That's not all: meet our security experts and attend live product demos! Come, get geeky!!

 

 

Visit: www.solarwinds.com/sans


Typically, IT admins spend a significant amount of time managing SQL Servers because of the volume of data usage and the number of applications accessing SQL databases. This also makes it difficult to optimize SQL Server performance. Let's get into the details of what you need to optimize and how to improve the performance and availability of your SQL Servers.

 

  • Index Fragmentation: SQL Server uses indexing to make data searches faster. When data is modified, the index contents scatter, causing fragmentation. Heavy index fragmentation not only slows down database searches, it also consumes more disk space, which degrades performance.
  • Storage Capacity of the Temporary Database: Every SQL Server has a shared database called tempdb, used for storing temporary user and internal objects. Bottlenecks can occur in tempdb because only a single tempdb is available per instance. To avoid bottlenecks here and improve performance, optimize the size of the tempdb file and monitor it from time to time.
  • Top Expensive Queries: One bad query from an application using the SQL database will affect the performance of the whole database server itself - not just the individual database belonging to that application. Monitoring expensive queries lets you troubleshoot possible issues before the SQL Server itself is brought down by a single query from one of the hosted applications (a scripted example follows this list).
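To illustrate how such a check can be scripted, here is a hedged sketch, again assuming the pyodbc driver, that lists the most expensive cached queries by total CPU time via the sys.dm_exec_query_stats DMV; the server name is a placeholder.

    # List the top cached queries by total CPU time (assumes pyodbc installed).
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sql01;Trusted_Connection=yes"
    )  # hypothetical server name
    cursor = conn.cursor()
    cursor.execute("""
        SELECT TOP 5
               qs.total_worker_time / 1000 AS total_cpu_ms,
               qs.execution_count,
               SUBSTRING(st.text, 1, 120) AS query_text
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
        ORDER BY qs.total_worker_time DESC
    """)
    for cpu_ms, count, text in cursor.fetchall():
        print("%d ms over %d runs: %r" % (cpu_ms, count, text))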


Monitor SQL Server and its Key Components


If you want to improve your database performance, you should monitor certain performance counters. These counters track the performance of your database server and the server hardware; monitoring them tells you the value of each counter and whether you need to take steps to improve performance.


  • Processor Time: Monitors the CPU load on the server.
  • Memory Utilization: You’ll know if there are memory bottlenecks which can ultimately lead to paging.


  • Storage Utilization: You can find out how much storage space is used and how much is free. This way you can allocate space accordingly.


  • Average Response Time: This counter tells you the response time of the SQL Server: the average response time for specific queries.
  • Page Life Expectancy: Monitoring this counter tells you how long the average page stays in the buffer cache before being removed. If it falls below the threshold, it is an indication that your SQL Server may require more RAM to improve performance.
  • Blocked Queries: You can check the number of queries the database server has blocked. Queries are blocked for reasons such as poor construction, taking too long to respond, heavy CPU time consumption, or efficient queries being slowed down by slower ones.
  • Transactions per Second: Tells you the number of database transactions started each second.


Advantages of Monitoring SQL Server Performance Counters


  • Keeps you aware of the performance and availability of the database server at any given time.
  • Increases the effectiveness of the database server.
  • Helps avoid performance bottlenecks.
  • Scales to monitor more databases and instances.
  • Helps you maintain your server hardware and keep it healthy.


Watch this video to find out how to optimize and improve your SQL server performance.

 

In the earlier blogs of this series, we saw snapshots of the SANS security survey showing how prepared organizations are in their methods of threat detection and response, and in generating security reports. Now, we'll see how organizations are equipped to collect security data and correlate it for threat intelligence.

 

A startling result that we found in the survey:

[Survey result chart on security data collection]

 

Security data collection is the process of gathering logs from across your IT infrastructure, including network devices, security appliances, servers, workstations, virtual machines, and databases. It doesn't stop with log data collection; how quickly the logs are collected also matters. Some attacks cause havoc within a few minutes, and not being able to collect data and gain knowledge of the attack in time is all the more detrimental to your sensitive data and systems.

 

Log collection is just the first step. Once the data is collected, there should be a mechanism to process the logs, normalize them, correlate them as quickly as possible, and generate intelligence to diagnose anomalies in network patterns and isolate suspicious events. Real-time event log correlation is most efficient when it is automated and happens in-memory, so that threat analysis becomes extremely fast.
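To make the idea concrete, here is a toy in-memory correlation rule in Python: flag any source that produces five failed logins within a sliding 60-second window. The thresholds and message format are illustrative only; a real SIEM normalizes many event types and correlates across them.

    # Toy in-memory correlation: N failed logins from one source within a window.
    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    THRESHOLD = 5
    recent = defaultdict(deque)   # source IP -> timestamps of failed logins

    def on_event(source_ip, message, now=None):
        if "failed login" not in message.lower():
            return
        now = now or time.time()
        q = recent[source_ip]
        q.append(now)
        while q and now - q[0] > WINDOW_SECONDS:   # drop events outside the window
            q.popleft()
        if len(q) >= THRESHOLD:
            print("ALERT: %d failed logins from %s in %ds"
                  % (len(q), source_ip, WINDOW_SECONDS))

    for _ in range(5):            # quick demonstration
        on_event("203.0.113.7", "Failed login for admin")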

Another interesting survey finding:

[Survey finding chart]

  

This is a challenge for security admins, as they cannot manually search through logs and hope to respond to attacks on time. Security information & event management (SIEM) tools help you automate event log correlation and alert you when there's a security threat or policy violation. Log management becomes simpler and automated when you use an external SIEM software. The survey showed that:

[Survey finding chart]

 

Your security needs will not be fulfilled if you are not ready to invest in a security solution that helps protect your network, systems, and sensitive data. Organizations need more awareness of how an automated SIEM system can help them gain real-time visibility into the security and operational events on their network. Try SolarWinds Log & Event Manager, a fully functional SIEM software that collects and correlates log data, alerts you in real time, and helps remediate threats with built-in automated incident responses.

 

Join SolarWinds at SANS Network Security 2013 Las Vegas

You are invited to stop by booth No. 14 TODAY (September 18th, 2013) to meet our security experts and geeks, attend live product demos, and find a solution to your security challenges. And yes, there is a lot of cool geek gear to grab and wear - complimentary, of course!

Visit: www.solarwinds.com/sans

