We’re happy to announce the release of SolarWinds® Exchange Monitor, a standalone free tool that provides a view of your Microsoft® Exchange Server performance.

Exchange Monitor gives your team insight into metrics, services, and Database Availability Group (DAG) status. This tool supports Exchange Server 2013 and 2016.

 

Metrics monitored by Exchange Monitor:

  • Total Service IOPS (count)
  • Host Used CPU (%)
  • Host Used Disk (%)
  • Host Used Memory (%)
  • I/O Log Writes Average Latency (ms)
  • I/O Log Reads Average Latency (ms)
  • I/O Database Writes Average Latency (ms)
  • I/O Database Reads Average Latency (ms)
  • Outstanding RPC Requests (count)
  • RPC Requests Failed (%)

Services monitored by Exchange Monitor:

  • Active Directory Topology
  • DAG Management
  • Exchange Replication
  • Information Store
  • Mailbox Assistants
  • Mailbox Replication
  • Mailbox Transport Delivery
  • Mailbox Transport Submission
  • Search
  • Service Host
  • Transport
  • Unified Messaging

DAG info:

  • Databases (count)
  • Current Size (GB)
  • Total DB Copies (count)
  • Disk Free Space (GB)
  • List of servers belonging to the DAG and their health status

Exchange Monitor gives you the ability to add as many Exchange servers as you wish. Simply click the “Add Server” button and fill in the IP address/domain name and credentials. The only limitation when adding a new server is that you cannot add two servers from the same DAG. Monitoring multiple servers from the same DAG can be achieved only with Server & Application Monitor (SAM).

Once you add a server to Exchange Monitor, it will load the polling interval, metrics with thresholds, and services to monitor from the global settings. You can edit these settings locally, per server. When looking at the server details page, click the COMMANDS button in the top-right corner and select “Manage Metrics”.

 

You can then fine-tune your thresholds for this server only. Here you can also turn off a metric if you do not wish to monitor it; when a metric is not monitored, you will not receive any notifications about it.

 

If an Exchange Server belongs to a DAG, you’ll see a Database Availability Group (DAG) resource on the server details screen. The bottom part of this resource contains information about all servers belonging to this DAG and their health status. You can inspect the status of these servers by clicking the “Show All” button.

 

Hovering over these servers will give you additional information about Content Index State and Copy status of the databases assigned to this server.

What else does Exchange Monitor do?

  • Allows you to add as many Exchange servers as you wish (limited to one monitored server per DAG)
  • Creates a notification whenever a metric goes over its threshold, a service enters a non-running state, or a DAG is unhealthy
  • Can write all notifications into the Event Log
  • Allows you to set the polling interval (how frequently you want to receive data from the Exchange server) per server
  • Allows you to set thresholds, metrics to monitor, and services to monitor per server

For more detailed information about Exchange Monitor, please see the SolarWinds Exchange Monitor Quick Reference guide here on THWACK®: https://thwack.solarwinds.com/docs/DOC-191309

Download SolarWinds Exchange Monitor: http://www.solarwinds.com/exchange-monitor

Don't have time to read this article and just want to try out the example template? You can download it here: Docker Basics for Linux

 

Overview

As I discuss cloud utilization with customers, one subject that often comes up is Docker.  Many customers are using cloud servers as Docker farms, or to supplement their on-premises Docker footprint.  OH!  There's that hybrid IT thing again.

 

Since some of our readers may be DevOps people without experience in SAM, I'm going to explain every step in detail.

 

We watch applications in SAM with templates.  An application template is simply a collection of one or more component monitors. Today we are going to create a simple Docker template as an example of what you can do to monitor Docker.

 

To monitor Docker we are going to:

1. Build a Docker server.

2. Use the Component Monitor Wizard to create component monitors for critical Docker processes and generate the base template.

3. Create a script component monitor to query Docker for critical metrics and insert that component monitor into the template from step 2.

 

Build a Docker Server (step 1 of 3)

First we need to load Docker on a target server.  I won't bore you with instructions about that.  If you need help installing, refer to Docker's own documentation at Docker Documentation.
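That said, if you want a quick start, Docker publishes a convenience script that works on most mainstream distributions. A minimal sketch (review the script before running it, and prefer your distribution's packages for production systems):

# download and run Docker's install convenience script, then verify the client
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
docker --version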

 

The next step is to manage the node where we installed Docker with SAM (if it wasn't already managed).  If you're using Linux, I'd recommend installing the Linux agent when you set up the node.  Again, this is well documented in the SolarWinds documentation, starting with Agent deployment.

 

Now that we've got a Linux node with the SolarWinds agent and Docker installed, we are ready to build a template to monitor it.

 

Start with a Process Monitor type component monitor (step 2 of 3)

In order to monitor an application we have to decide what to look at on the target node.  I'm talking about basic availability monitoring by ensuring that the service/process is up.  We accomplish this with either a service or process component monitor.  Using a Service or Process monitor is a best practice when creating a new template.

 

SAM offers Windows Service monitors along with Windows or Linux Process monitors.  In this example I'm targeting an Ubuntu server running Docker, so I'll use a Linux Process Monitor.

 

The easiest way to create a component monitor is using the Component Monitor Wizard.  Sometimes new users miss this option because it's located at Settings->All Settings->SAM Settings->Getting Started with SAM.

In the wizard, I will choose Linux Process Monitor and click "Next".

 

Now we get to the reason it was so important to have an application node set up and defined in SAM before we started.  The next screen allows us to enter the IP address of the Linux server I set up earlier.  In this case, it's a 64-bit operating system, so be sure to change the 32-bit setting to 64-bit.  Once again, click "Next".

 

This is where the power of the wizard becomes evident.  You will see a list of the processes on the target server with empty checkboxes next to them.  Just check the processes associated with Docker (in this case dockerd and docker-containerd), then click "Next".

(For more about why Docker split the Docker engine into two pieces, take a look at Docker containerd integration - Docker Blog.)
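If you're not sure what the process names look like on your build, you can list them on the target server first. A quick check (exact names vary by Docker version and packaging; newer releases may show containerd instead of docker-containerd):

# list running processes whose names mention docker or containerd
ps -eo comm | grep -E 'docker|containerd'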

 

Now our component monitors have been created.  You can optionally check the boxes to the left and test them now.  Since the wizard found the processes on the test node a moment ago, the test should pass.  Just click "Next" again.

You can either create a new application template with these component monitors or add them to an existing one.  In this case we are going to create a new template called "Docker Basics for Linux".  It's a good idea to include the OS type in the name, since we might run Docker on Windows in our environment later and we will want to know that this particular template is for Linux.  I also included the term "Basics" in case I get ambitious and write a more complex template later.

 

Click "Next" to continue to a screen where we can assign the template to SAM nodes.  The node we used for developing the template should be checked by default.  Just click "Next" again if that's the only node we are monitoring for now.

 

The final step is to confirm the creation of the component monitors.  Click "OK, Create" and you're done.

If you go to the SAM Summary page, you'll see your template has been applied.  Don't worry if it shows an "unknown" state at first.  It can take a few minutes to do its initial polling.  It should turn green once discovered.

That's it.  You're now monitoring Docker availability.

 

If you wanted to monitor Docker on Windows, you could have used a Windows Process Monitor or a Windows Service Monitor, which both work similarly.  This provides the most basic information about whether Docker is even running on a target system.

 

Monitor Docker from the inside with a Script Component Monitor (Step 3 of 3)

For the next step, let's add some visibility into Docker itself.

Go to Settings->All Settings->SAM Settings->Manage Templates

You should see a list of all templates on your SAM system.  In the search box at the upper right corner, type "docker" and click "Search".  This should filter the list to only include the Docker template we created above.

Check the checkbox to the left of our template and click "Edit".

 

You should see the details of the template with the 2 component monitors we created earlier.

At this point, you can add a description and tags, or change the polling settings.  You can also change component monitors in the template.

Let's add a component monitor to show basic stats about Docker.  The easiest way to get those stats is to run the "docker info" command on the Linux server.
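The output will begin something like this (abbreviated, with illustrative values; the exact fields depend on your Docker version):

Containers: 1
 Running: 1
 Paused: 0
 Stopped: 0
Images: 2
...followed by dozens more lines of engine, storage, and network details...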

Wow!  That's a lot of information.  But how do we get it into SAM?

Not to worry, SAM lets us run scripts and can collect up to 10 statistic/message pairs from the script.

Just select "Add Component Monitor" on the screen above and click "Submit".

 

Then select "Linux/Unix Script Monitor" (you may have to advance to the 2nd page of results) and click "Add".

Now you should see a new component monitor on the list that is opened for editing.

Enter a description and choose the authentication method and credentials. In this case I chose to inherit credentials from the node.  The script is run over SSH, so port 22 must be open in any firewall configuration.
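If the component fails to test later, it's worth confirming basic SSH reachability before digging into SAM itself. For example, from any machine that can reach the target (192.0.2.10 is a placeholder for your node's address):

# verify that TCP port 22 is reachable on the target node
nc -zv 192.0.2.10 22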

 

I entered a working directory of "/tmp" for any disk I/O required by SAM.

 

I will also rename the component monitor to something more meaningful.

 

The "command line" field will be set up for a perl script by default.  We are using a shell script so we will delete the "perl" part and replace it with "bash".

Next, click "Edit Script" to enter the actual script commands.

 

Insert a statistic into SAM with a bash script

In order to use a script with SAM, we need to format the data in a very specific way.  SAM expects the only output from the script to consist of key/value pairs, separated by ":", each of which is either a statistic or a message.  The key for a statistic consists of the keyword "Statistic", then a period and a variable name, followed by ":".

For example, if we were trying to capture the number of containers, we would use Statistic.container:1, and for a description we would use Message.container:Containers. This would allow SAM to display a value of 1 with a heading of "Containers"; the variable name "container" ties the two together.  That's a brief explanation of getting statistics into SAM.  I could write an entire article on script component monitors alone.  Oh, look!  Someone already did... SAM Script Component Monitors - Everything you need to know
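To make that contract concrete, here is roughly the smallest script SAM would accept: a sketch that simply hard-codes the single pair from the example above.

#!/bin/bash
# one message/statistic pair, tied together by the variable name "container"
echo "Message.container:Containers"
echo "Statistic.container:1"
# exit code 0 reports the component as up
exit 0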

 

This would be pretty simple if we only wanted the single "Containers" line from the "docker info" command.  We could pipe the output of "docker info" through "grep" to find the line that contains "Containers", and then pipe that line through "sed" to substitute our special identifying text for the heading "Containers: ".  To see how this works, type this on the Linux command line:

docker info | grep "Containers: "| sed s/"Containers: "/"Message.container:Containers \nStatistic.container:"/

Which would output:

 

Message.container:Containers

Statistic.container:1

 

Show more than one statistic

However if we want to grab more than one line from "docker info", it gets a bit more complicated.

First, to avoid having to grab multiple lines, I decided to just have Docker send the output in JSON format.  This can be done by using the format option:

 

docker info --format '{{json .}}'

 

which returned:
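Trimmed to the keys we care about here, the returned line looks something like this (illustrative values; the real output carries many more keys):

{"ID":"...","Containers":1,"ContainersRunning":1,"ContainersPaused":0,"ContainersStopped":0,"Images":2,...}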


 

Now we have the output in a single, JSON-formatted line. 

 

Now to create a poor-man's JSON parser.  I simply used awk to grab the data I was interested in.  I didn't use a JSON parser like "jq" because I didn't want the burden of ensuring that this code was installed on the target system. 

 

Here is the script that I used:

 

#!/bin/bash
# grab the docker info output as a single JSON-formatted line
JSON="$(/usr/bin/docker info --format '{{json .}}')"
# split on commas to isolate each key/value pair, then split on ":" to emit
# a Message/Statistic pair for SAM ($JSON is quoted to avoid word splitting)
echo "$JSON" | awk -F ',' '{print $3}' | awk -F ':' '{print "Message.running:"$1,"\nStatistic.running: "$2}'
echo "$JSON" | awk -F ',' '{print $4}' | awk -F ':' '{print "Message.paused:"$1,"\nStatistic.paused: "$2}'
echo "$JSON" | awk -F ',' '{print $5}' | awk -F ':' '{print "Message.stopped:"$1,"\nStatistic.stopped: "$2}'
echo "$JSON" | awk -F ',' '{print $6}' | awk -F ':' '{print "Message.images:"$1,"\nStatistic.images: "$2}'

This grabs the 3rd, 4th, 5th, and 6th objects in the JSON and plugs them into SAM as a message and a statistic, respectively.  At this point I have to admit that there is some risk here: Docker could change the output of this command and break my positional parsing, while someone using a real JSON parser to look up the proper key/value pair would be unaffected.  You can test the script from the Linux command line to be sure that it works.
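If you'd like to hedge against that positional risk without installing jq, you can match on key names instead. A sketch for a single metric (key names taken from the JSON shown above):

#!/bin/bash
# pull a value out of the docker info JSON by key name instead of position
JSON="$(/usr/bin/docker info --format '{{json .}}')"
running=$(echo "$JSON" | grep -oE '"ContainersRunning":[0-9]+' | cut -d: -f2)
echo "Message.running:ContainersRunning"
echo "Statistic.running:$running"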

 

Now that we've got a working script on the Linux server, we can insert it into the SAM Linux script component monitor.

Here's a screenshot of the script being put into SAM:

 

A couple of things to notice in this screenshot:

  • I couldn't run the "docker info" command as a normal user; I had to use "sudo" to run it as the super-user.  This required me to modify my "sudoers" file to add the "NOPASSWD:ALL" option for the user whose credentials I'm using.  This change allows SAM to run a privileged command without being prompted again for the password (see the example after this list).  More on that here: How To Monitor Linux With SSH And Sudo
  • I used a fully qualified path for the docker executable.  This ensures that I don't have issues due to the user's PATH statement.
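For reference, the sudoers change boils down to a single line. A sketch, where "monitor" is a placeholder for the account SAM uses; always make the change with visudo:

# added with: visudo -f /etc/sudoers.d/sam-monitoring
# "monitor" is a placeholder for the monitoring account
monitor ALL=(ALL) NOPASSWD:ALL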

 

Here are some screenshots of the monitor as I spun up some images and containers on the target system.


That's it!  You're monitoring Docker.

 

For those of you running Docker on Windows: most of the docker info output is similar, if not the same, and the script component monitor could be written in PowerShell.

The Center for Internet Security (CIS) provides a comprehensive security framework called The CIS Critical Security Controls (CSC) for Effective Cyber Defense, which provides organizations of any size with a set of clearly defined controls to reduce their risk of cyberattack and improve their IT security posture. The framework consists of 20 controls to implement; however, according to CIS, implementing the first five controls provides effective defense against 85% of the most common cyberattacks. CIS provides guidance on how to implement the controls and which tools to use, reducing the burden on security teams. Without this guidance, those teams would have to spend time deciphering the meaning and objective of each critical security control.

 

SolarWinds offers several tools that provide the capabilities to implement many of the CIS Controls. In this post, I'm going to break down each Critical Security Control and discuss how SolarWinds® products can assist.

 

Critical Security Control 1: Inventory of Authorized and Unauthorized Devices

Actively manage (inventory, track, and correct) all hardware devices on the network so that only authorized devices are given access, and unauthorized and unmanaged devices are found and prevented from gaining access.

Asset Discovery is an important step in identifying any unauthorized and unprotected hardware being attached to your network. Unauthorized devices undoubtedly pose risks and must be identified and removed as quickly as possible.

SolarWinds User Device Tracker (UDT) enables you to detect unauthorized devices on both your wired and wireless networks. Information such as MAC address, IP address, and host name can be used to create blacklists and watch lists. UDT also provides the ability to disable the switch port used by a device, helping to ensure that access is removed.

 

Critical Security Control 2: Inventory of Authorized and Unauthorized Software

Actively manage (inventory, track, and correct) all software on the network so that only authorized software is installed and can execute, and that unauthorized and unmanaged software is found and prevented from installation or execution.

As the saying goes, you don’t know what you don’t know. Making sure that software on your network is up to date is essential when it comes to preventing attacks on known vulnerabilities. It’s very difficult to keep software up to date if you don’t know what software is running out there.

SolarWinds Patch Manager can create an inventory of all software installed across your Microsoft® Windows® servers and workstations. Inventory scans can be run ad hoc or on a scheduled basis, with software inventory reports scheduled accordingly. Patch Manager can also go a step further and uninstall unauthorized software remotely. CSC2 also mentions preventing the execution of unauthorized software. SolarWinds Log and Event Manager can be leveraged to monitor for any non-authorized processes and services launching, and then block them in real time.

Critical Security Control 3: Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers

Establish, implement, and actively manage (track, report on, and correct) the security configuration of laptops, servers, and workstations using a rigorous configuration management and change control process to prevent attackers from exploiting vulnerable services and settings.

Attackers prey on vulnerable configurations. Identifying vulnerabilities and making necessary adjustments helps prevent attackers from successfully exploiting them. Change management is critical to helping ensure that any configuration changes made to devices do not negatively impact their security.

Lacking awareness of access and changes to important system files, folders, and registry keys can threaten device security. SolarWinds LEM includes File Integrity Monitoring, which monitors for any alterations to critical system files and registry keys that may result in insecure configurations. LEM will notify you immediately of any changes, including permission changes.

 

CSC 4: Continuous Vulnerability Assessment and Remediation

Continuously acquire, assess, and take action on new information in order to identify vulnerabilities, remediate, and minimize the window of opportunity for attackers.

To help remediate vulnerabilities, we first need to identify those that already exist on your network. SolarWinds Risk Intelligence includes host-based vulnerability scanning capabilities. Risk Intelligence leverages the CVSS database to uncover the latest threats. If vulnerabilities are identified as a result of outdated software and missing OS updates, SolarWinds Patch Manager can be used to apply those updates and remediate the vulnerabilities. If you have a vulnerability scanner such as Nessus®, Rapid7®, or Qualys®, LEM can parse event logs from these sources to alert on detected vulnerabilities and correlate activity. SolarWinds Network Configuration Manager can help identify risks to network security and reliability by detecting potential vulnerabilities in Cisco ASA® and IOS®-based devices via integration with the National Vulnerability Database. You can even update the firmware on IOS-based devices to remediate known vulnerabilities.

 

CSC 5: Controlled Use of Administrative Privileges

The processes and tools used to track/control/prevent/correct the use, assignment, and configuration of administrative privileges on computers, networks, and applications.

Administrative privileges are very powerful and can cause grave damage when they are in the wrong hands. Administrative access is the Holy Grail for any attacker. As the control states, administrative privileges need to be tracked, controlled, and prevented. A SIEM tool such as SolarWinds LEM can and should be used to monitor for privileged account usage. This can include monitoring authentication attempts, account lockouts, password changes, file access/changes, and any other actions performed by administrative accounts. SIEM tools can also be used to monitor for new administrative account creation and for existing accounts being granted privilege escalation. LEM includes real-time filters, correlation rules, and reports to assist with the monitoring of administrative privileges.

CSC 6: Maintenance, Monitoring, and Analysis of Audit Logs

Collect, manage, and analyze audit logs of events that could help detect, understand, or recover from an attack.

As you've probably guessed by the title, this one has Security Information and Event Management (SIEM) written all over it. Collecting and analyzing your audit logs from all the devices on your network can greatly reduce your MTTD (mean time to detection) when an internal or external attack is taking place. Collecting logs is only one part of the equation. Analyzing and correlating event logs can help to identify any suspicious patterns of behavior and alert/respond accordingly. If an attack takes place, your audit logs are like an evidence room. They allow you to put the pieces of the puzzle together, understand how the attack took place, and remediate appropriately. SolarWinds LEM is a powerful SIEM tool that includes such features as log normalization, correlation, active response, reporting, and more.

CSC 7: Email and Web Browser Protections

Minimize the attack surface and the opportunities for attackers to manipulate human behavior through their interaction with web browsers and email systems.

According to a recent study from Barracuda®, 76% of ransomware is distributed via email. Web browsers are also an extremely popular attack vector, with everything from scripting technologies such as ActiveX® and JavaScript®, to unauthorized plug-ins, vulnerable out-of-date browsers, and malicious URL requests. CSC7 focuses on limiting the use of unauthorized browsers, email clients, plugins, and scripting languages, and on monitoring URL requests.

SolarWinds Patch Manager can identify and uninstall any unauthorized browsers or email clients installed on servers and workstations. For authorized browsers and email clients such as Google® Chrome®, Mozilla® Firefox®, Internet Explorer®, Microsoft® Outlook®, and Mozilla® Thunderbird®, Patch Manager can help ensure that they are up to date. LEM can take it a step further and block any unauthorized browsers and email clients from launching, thanks to its "kill process" active response. LEM can also collect logs from various proxy and content filtering appliances to monitor URL requests. This also helps validate any blocked URL requests.

 

CSC 8: Malware Defenses

Control the installation, spread, and execution of malicious code at multiple points in the enterprise, while optimizing the use of automation to enable rapid updating of defense, data gathering, and corrective action.

LEM can integrate with a wide range of antivirus and UTM appliances to monitor for malware detection and respond accordingly. LEM also provides threat feed integration to monitor for communication with known bad actors associated with malware and other malicious activity. Control 8.3 involves limiting the use of external devices such as USB thumb drives and hard drives. LEM includes USB Defender® technology, which monitors USB storage device usage and detaches any unauthorized devices.

CSC 9: Limitation and Control of Network Ports, Protocols, and Services

Manage (track/control/correct) the ongoing operational use of ports, protocols, and services on networked devices in order to minimize windows of vulnerability available to attackers.

Attackers are constantly scanning for open ports, vulnerable services, and protocols in use. The principle of least privilege should be applied to ports, protocols, and services: if there isn't a business need for them, they should be disabled. When people talk about ports, they generally think of checking perimeter devices such as firewalls, but internal devices such as servers should also be taken into consideration when tracking open ports, enabled protocols, and so on.

SolarWinds provides a free tool that can scan available IP addresses and their corresponding TCP and UDP ports in order to identify any potential vulnerabilities; the tool is aptly named SolarWinds Port Scanner. Network Configuration Manager can be used to report on network device configurations to identify any vulnerable or unused ports, protocols, and services running on your network devices. NetFlow Traffic Analyzer (NTA) can also be used to monitor flow data in order to identify traffic flowing across an individual port or a range of ports. NTA also identifies any unusual protocols and the volume of traffic utilizing those protocols. Finally, LEM can monitor for any unauthorized services launching on your servers/workstations, as well as traffic flowing on specific ports based on syslog from your firewalls.

 

CSC 10: Data Recovery Capability

The processes and tools used to properly back up critical information with a proven methodology for timely recovery of it.

Currently, ransomware attacks take place every 40 seconds, which makes data backup and recovery capabilities incredibly critical. CSC10 involves ensuring backups take place on at least a weekly basis, and more frequently for sensitive data. Some of the controls in this category also include testing backup media and restoration processes on a regular basis, as well as ensuring backups are protected via physical security or encryption. SolarWinds MSP Backup & Recovery can assist with this control. Server & Application Monitor can validate that backup jobs are successful, thanks to application monitors for solutions such as Veeam® Backup and Symantec® Backup Exec®.

CSC 11: Secure Configurations for Network Devices such as Firewalls, Routers, and Switches

Establish, implement, and actively manage (track, report on, correct) the security configuration of network infrastructure devices using a rigorous configuration management and change control process in order to prevent attackers from exploiting vulnerable services and settings.

This critical control is similar to CSC3, which focuses on secure configurations for servers, workstations, laptops, and applications. CSC11 focuses on the configuration of network devices, such as firewalls, routers, and switches. Network devices typically ship with default configurations, including default usernames, passwords, SNMP strings, and open ports. All of these should be amended to help ensure that attackers cannot take advantage of default accounts and configurations. Device configuration should also be compared against secure baselines for each device type. CSC11 also recommends that an automated network configuration management and change control system be in place; enter NCM.

NCM is packed with features to assist with CSC11, including real-time change detection, a configuration change approval system, Cisco IOS® firmware updates, configuration baseline comparisons, bulk configuration changes, DISA STIG reports, and more.

 

CSC 12: Boundary Defense

Detect/prevent/correct the flow of information transferring networks of different trust levels with a focus on security-damaging data.

There is no silver bullet when it comes to boundary defense to detect and prevent attacks and malicious behavior. Aside from firewalls, technologies such as IDS/IPS, SIEM, Netflow and web content filtering can be used to monitor traffic at the boundary to identify any suspicious behavior. SolarWinds LEM can ingest log data from sources such as IDS/IPS, firewalls, proxies, and routers to identify any unusual patterns, including port scans, ping sweeps, and more. NetFlow Traffic Analyzer can also be used to monitor both ingress and egress traffic to identify anomalous activity.

 

CSC 13: Data Protection

The processes and tools used to prevent data exfiltration, mitigate the effects of exfiltrated data, and ensure the privacy and integrity of sensitive information.

 

Data is one of every organization's most critical assets and needs to be protected accordingly. Data exfiltration is one of the most common objectives of attackers, so controls need to be in place to prevent and detect it. Data is everywhere in organizations. One of the first steps to protecting sensitive data involves identifying the data that needs to be protected and where it resides.

SolarWinds Risk Intelligence (RI) is a product that performs a scan to discover personally identifiable information and other sensitive information across your systems, and points out potential vulnerabilities that could lead to a data breach. The reports from RI can be helpful in providing evidence of due diligence when it comes to the storage and security of PII data. Data Loss Prevention and SIEM tools can also assist with CSC13. LEM includes File Integrity Monitoring and USB Defender, which can monitor for data exfiltration via file copies to a USB drive. LEM can even automatically detach the USB device if file copies are detected, or detach it as soon as it is inserted into the machine. LEM can also audit URL requests to known file hosting/transfer and webmail sites, which may be used to exfiltrate sensitive data.

 

CSC 14: Controlled Access Based on the Need to Know

The processes and tools used to track/control/prevent/correct secure access to critical assets (e.g., information, resources, systems) according to the formal determination of which persons, computers, and applications have a need and right to access these critical assets based on an approved classification.

As per the control description: 'job requirements should be created for each user group to determine what information the group needs access to in order to perform its jobs. Based on those requirements, access should only be given to the segments or servers that are needed for each job function.' Basically, provide users with the appropriate level of access required by their role, but don't give them access beyond that. Some of the controls in this section involve network segmentation and encrypting data in transit over less-trusted networks and at rest. CIS also recommends enforcing detailed logging for access to data. LEM can ingest these logs to monitor for authentication events and access to sensitive information. File Integrity Monitoring includes the ability to monitor for inappropriate file access, including modifications to permissions. LEM can also monitor Active Directory® logs for any privileged escalations to groups such as Domain Admins.

CSC 15: Wireless Access Control

The processes and tools used to track/control/prevent/correct the secure use of wireless local area networks (WLANs), access points, and wireless client systems.

Wireless connectivity has become the norm for many organizations, and just like wired networks, access needs to be controlled. Some of the sub-controls in this section involve creating VLANs for BYOD/untrusted wireless devices, helping to ensure that wireless traffic leverages WPA2 and AES, and identifying rogue wireless devices and access points.

Network Performance Monitor and User Device Tracker can be used to identify rogue access points and unauthorized wireless devices connected to your WLAN. brad.hale has a great blog post on the topic of monitoring rogue access points here.

 

CSC 16: Account Monitoring and Control

Actively manage the life cycle of system and application accounts (their creation, use, dormancy, and deletion) in order to minimize opportunities for attackers to leverage them.

Account management, monitoring, and control is vital to making sure that accounts are being used for their intended purposes and not for malicious intent. Attackers tend to prefer leveraging existing, legitimate accounts rather than trying to discover vulnerabilities to exploit; it saves a lot of time and effort. Outside of having clearly defined account management policies and procedures, having a SIEM in place, like LEM, can go a long way toward detecting potentially compromised or abused accounts.

LEM includes a wide range of out-of-the-box content to assist you with Account Monitoring and Control, including filters, rules and reports. You can easily monitor for events such as:

  • Account creation
  • Account lockout
  • Account expiration (especially important when an employee leaves the company)
  • Escalated privileges
  • Password changes
  • Successful and failed authentication

 

Active Response is also included, which can respond to these events via actions such as automatically disabling an account, removing it from a group, and logging users off.

 

CSC 17: Security Skills Assessment and Appropriate Training to Fill Gaps

For all functional roles in the organization (prioritizing those mission-critical to the business and its security), identify the specific knowledge, skills, and abilities needed to support defense of the enterprise; develop and execute an integrated plan to assess, identify gaps, and remediate through policy, organizational planning, training, and awareness programs.

You can have all the technology, processes, procedures, and governance in the world, but your IT security is only as good as its weakest link, and that is people. As Dez always says, "Security is not an IT problem, it's everyone's problem." A security awareness program should be in place in every organization, regardless of its size. Users need to be educated on the threats they face every day, for example social engineering, phishing attacks, and malicious attachments. If users are equipped with this knowledge and are aware of threats and risks, they are far more likely to identify, prevent, and alert on attacks. Some of the controls included in CSC17 involve performing a gap analysis of users' IT security awareness, delivering training (preferably from senior staff), implementing a security awareness program, and validating and improving awareness levels via periodic tests. Unfortunately, SolarWinds doesn't provide any solutions that can train your users for you, but know that we would if we could!

 

CSC 18: Application Software Security

Manage the security life cycle of all in-house developed and acquired software in order to prevent, detect, and correct security weaknesses.

Attackers are constantly on the lookout for vulnerabilities to exploit. Security practices and processes must be in place to identify and remediate vulnerabilities in your environment. There is an endless list of possible attacks that can capitalize on vulnerabilities, including buffer overflows, SQL injection, cross-site scripting, and many more. For in-house developed applications, security shouldn't be an afterthought that is simply bolted on at the end; it needs to be considered at every stage of the SDLC. Some of the sub-controls within CSC18 address this, including error checking for in-house apps, testing for weaknesses, and ensuring that development artifacts are not included in production code.

 

CSC 19: Incident Response and Management

Protect the organization’s information, as well as its reputation, by developing and implementing an incident response infrastructure (e.g., plans, defined roles, training, communications, management oversight) for quickly discovering an attack and then effectively containing the damage, eradicating the attacker’s presence, and restoring the integrity of the network and systems.

An incident has been identified. Now what? CSC19 focuses on people and process rather than technical controls. This critical control involves helping to ensure that written incident response procedures are in place, and making sure that IT staff are aware of their duties and responsibilities when an incident is detected. It's all well and good to have technical controls such as SIEM, IDS/IPS, and NetFlow in place, but they need to be backed up with an incident response plan once an incident is detected.

 

CSC 20: Penetration Tests and Red Team Exercises

Test the overall strength of an organization’s defenses (the technology, the processes, and the people) by simulating the objectives and actions of an attacker.

Now that you've implemented the previous 19 Critical Security Controls, it's time to test them. Testing should only take place once your defensive mechanisms are in place, and it needs to be an ongoing effort, not a one-off; environments and the threat landscape are constantly changing. Some of the controls within CSC20 include vulnerability scanning as the starting point to guide and focus penetration testing, conducting both internal and external penetration tests, and documenting results.

 

I hope at this point you have an understanding of each Critical Security Control and some of the ways in which SolarWinds tools can assist. While it may seem like a daunting exercise to implement all 20 controls, it's worth casting your mind back to the start of this post, where I mentioned that implementing even the first five critical controls provides effective defense against 85% of the most common cyberattacks.

 

I hope that you've found this post helpful. I look forward to hearing about your experiences and thoughts on the CIS CSCs in the comments.

AppStack is a very useful feature for troubleshooting application issues, which you can find by clicking Environment under the Home menu in the Orion® Web Console. The feature allows you to understand your application environment by exposing the relationships between the applications and the related infrastructure that supports them. The more SolarWinds System Management products you have installed, the more information you are provided in the AppStack UI. From applications to databases, virtual hosts to storage arrays, you can get a complete picture of the entire “application stack”. You can learn more about the basics of using AppStack by watching this video.

A common question I hear is, “How does AppStack know what is related to what?” Well, AppStack uses concepts like automatic end-to-end mapping to automatically determine relationships between things like datastores and LUNs. But AppStack also utilizes user-defined dependencies to allow you to specify relationships between particular monitored entities. For example, you can create a user-defined dependency between the Microsoft IIS application you are monitoring and the Microsoft SQL Server that it relies on for business-critical functionality. To configure user-defined dependencies, click Manage Dependencies on the Main Settings & Administration page.

When you select the Microsoft IIS application you created the dependency for in the AppStack UI, you can see not only the infrastructure stack that supports that particular web server, but also the other applications or servers that it depends on.

You can use the Spotlight function in AppStack to quickly narrow down the visibility to only the object you select and the objects that are related to it, including the user-defined dependencies. In this case, you can see the IIS server and the SQL server and the related infrastructure stack. Both the objects related to the IIS application and the SQL Server application will be shown.

Once you build out your dependencies, you will be able to quickly traverse from one monitored application to another, and gain a complete understanding of your complex application environment. In the example of the IIS application and SQL server, you can select the SQL server and see what other applications are dependent on it.

As you continue to build out your user-defined dependencies, you will be able to quickly traverse all of the relationships between the applications you monitor and the other monitored objects in your environment, like web transactions. This will, in turn, allow you to determine the root cause of application issues faster, by giving you better visibility into the entire application and infrastructure landscape.


In my previous posts, I talked about building the virtual machine and then about prepping the disks.  That's all done for this particular step.

 

This is a long set of scripts.  Here's the list of what we'll be doing:

  1. Variable Declaration
  2. Installing Windows Features
  3. Enabling Disk Performance Metrics
  4. Installing some Utilities
  5. Copying the IIS Folders to a new Location
  6. Enable Deduplication (optional)
  7. Removing unnecessary IIS Websites and Application Pools
  8. Tweaking the IIS Settings
  9. Tweaking the ASP.NET Settings
  10. Creating a location for the TFTP and SFTP Roots (for NCM)
  11. Configuring Folder Redirection
  12. Pre-installing ODBC Drivers (for SAM Templates)

 

Stage 1: Variable Declaration

This is super simple (as variable declarations should be).

#region Variable Declaration
$PageFileDrive = "D:\"
$ProgramsDrive = "E:\"
$WebDrive      = "F:\"
$LogDrive      = "G:\"
#endregion

 

Stage 2: Installing Windows Features

This is the longest part of the process, and it can't be helped.  The Orion installer will do this for you automatically, but if I do it in advance, I can play with some of the settings before I actually perform the installation.

#region Add Necessary Windows Features
# this is a list of the Windows Features that we'll need
# it's being filtered for those which are not already installed
$Features = Get-WindowsFeature -Name FileAndStorage-Services, File-Services, FS-FileServer, Storage-Services, Web-Server, Web-WebServer, Web-Common-Http, Web-Default-Doc, Web-Dir-Browsing, Web-Http-Errors, Web-Static-Content, Web-Health, Web-Http-Logging, Web-Log-Libraries, Web-Request-Monitor, Web-Performance, Web-Stat-Compression, Web-Dyn-Compression, Web-Security, Web-Filtering, Web-Windows-Auth, Web-App-Dev, Web-Net-Ext, Web-Net-Ext45, Web-Asp-Net, Web-Asp-Net45, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Mgmt-Tools, Web-Mgmt-Console, Web-Mgmt-Compat, Web-Metabase, NET-Framework-Features, NET-Framework-Core, NET-Framework-45-Features, NET-Framework-45-Core, NET-Framework-45-ASPNET, NET-WCF-Services45, NET-WCF-HTTP-Activation45, NET-WCF-MSMQ-Activation45, NET-WCF-Pipe-Activation45, NET-WCF-TCP-Activation45, NET-WCF-TCP-PortSharing45, MSMQ, MSMQ-Services, MSMQ-Server, FS-SMB1, User-Interfaces-Infra, Server-Gui-Mgmt-Infra, Server-Gui-Shell, PowerShellRoot, PowerShell, PowerShell-V2, PowerShell-ISE, WAS, WAS-Process-Model, WAS-Config-APIs, WoW64-Support, FS-Data-Deduplication | Where-Object { -not $_.Installed }
$Features | Add-WindowsFeature
#endregion

 

Without the comments, this is two lines.  Yes, only two lines, but very important ones.  The very last Windows feature that I install is Data Deduplication (FS-Data-Deduplication).  If you don't want this, you are free to remove it from the list and skip Stage 6.

 

Stage 3: Enabling Disk Performance Metrics

These counters are disabled in Windows Server by default, but I like to see them, so I re-enable them.  It's super simple.

#region Enable Disk Performance Counters in Task Manager
Start-Process -FilePath "C:\Windows\System32\diskperf.exe" -ArgumentList "-Y" -Wait
#endregion

 

Stage 4: Installing some Utilities

This is entirely for me.  There are a few utilities that I like on every server that I use regardless of version.  You can configure this to do it in whatever way you like.  Note that I no longer install 7-zip as part of this script because I'm deploying it via Group Policy.

#region Install 7Zip
# This can now be skipped because I'm deploying this via Group Policy
# Start-Process -FilePath "C:\Windows\System32\msiexec.exe" -ArgumentList "/i", "\\Path\To\Installer\7z1604-x64.msi", "/passive" -Wait
#endregion
#region Install Notepad++
# Install NotePad++ (current version)
# Still need to install the Plugins manually at this point, but this is a start
Start-Process -FilePath "\\Path\To\Installer\npp.latest.Installer.exe" -ArgumentList "/S" -Wait
#endregion
#region Setup UTILS Folder
# This contains the SysInternals and Unix Utils that I love so much.
$RemotePath = "\\Path\To\UTILS\"
$LocalPath  = "C:\UTILS\"
Start-Process -FilePath "C:\Windows\System32\robocopy.exe" -ArgumentList $RemotePath, $LocalPath, "/E", "/R:3", "/W:5", "/MT:16" -Wait
$MachinePathVariable = [Environment]::GetEnvironmentVariable("Path", "Machine")
# Double quotes are required here so the $( $LocalPath ) subexpression expands;
# with single quotes, -like would compare against the literal string
if ( -not ( $MachinePathVariable -like "*$( $LocalPath )*" ) )
{
    $MachinePathVariable += ";$LocalPath;"
    $MachinePathVariable = $MachinePathVariable.Replace(";;", ";")
    Write-Host "Adding C:\UTILS to the Machine Path Variable" -ForegroundColor Yellow
    Write-Host "You must close and reopen any command prompt windows to have access to the new path"
    [Environment]::SetEnvironmentVariable("Path", $MachinePathVariable, "Machine")
}
else
{
    Write-Host "[$( $LocalPath )] already contained in machine environment variable 'Path'"
}
#endregion

 

Stage 5: Copying the IIS folders to a New Location

I don't want my web files on the C:\ Drive.  It's just something that I've gotten in the habit of doing from years of IT, so I move them using robocopy.  Then I need to re-apply some permissions that are stripped.

#region Copy the IIS Root to the Web Drive
# I can do this with Copy-Item, but I find that robocopy works better at keeping permissions
Start-Process -FilePath "robocopy.exe" -ArgumentList "C:\inetpub", ( Join-Path -Path $WebDrive -ChildPath "inetpub" ), "/E", "/R:3", "/W:5" -Wait
#endregion
#region Fix IIS temp permissions
$FolderPath = Join-Path -Path $WebDrive -ChildPath "inetpub\temp"
$CurrentACL = Get-Acl -Path $FolderPath
$AccessRule = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule -ArgumentList "NT AUTHORITY\NETWORK SERVICE", "FullControl", ( "ContainerInherit", "ObjectInherit" ), "None", "Allow"
$CurrentACL.SetAccessRule($AccessRule)
$CurrentACL | Set-Acl -Path $FolderPath
#endregion

 

Stage 6: Enable Deduplication (Optional)

I only want to deduplicate the log drive - I do this via this script.

#region Enable Deduplication on the Log Drive
Enable-DedupVolume -Volume ( $LogDrive.Replace("\", "") )
Set-DedupVolume -Volume ( $LogDrive.Replace("\", "") ) -MinimumFileAgeDays 0 -OptimizeInUseFiles -OptimizePartialFiles
#endregion

 

Stage 7: Remove Unnecessary IIS Websites and Application Pools

Orion will create its own website and application pool, so I don't need the default ones.  I destroy them with PowerShell.

#region Delete Unnecessary Web Stuff
Get-WebSite -Name "Default Web Site" | Remove-WebSite -Confirm:$false
Remove-WebAppPool -Name ".NET v2.0" -Confirm:$false
Remove-WebAppPool -Name ".NET v2.0 Classic" -Confirm:$false
Remove-WebAppPool -Name ".NET v4.5" -Confirm:$false
Remove-WebAppPool -Name ".NET v4.5 Classic" -Confirm:$false
Remove-WebAppPool -Name "Classic .NET AppPool" -Confirm:$false
Remove-WebAppPool -Name "DefaultAppPool" -Confirm:$false
#endregion

 

Step 8: Tweak the IIS Settings

This step is dangerous.  There's no other way to say this.  If you get the syntax wrong, you can really screw up your system... this is also why I save a backup of the file before I make any changes.

#region Change IIS Application Host Settings
# XML Object that will be used for processing
$ConfigFile = New-Object -TypeName System.Xml.XmlDocument
# Change the Application Host settings
$ConfigFilePath = "C:\Windows\System32\inetsrv\config\applicationHost.config"
# Load the Configuration File
$ConfigFile.Load($ConfigFilePath)
# Save a backup if one doesn't already exist
if ( -not ( Test-Path -Path "$ConfigFilePath.orig" -ErrorAction SilentlyContinue ) )
{
    Write-Host "Making Backup of $ConfigFilePath with '.orig' extension added" -ForegroundColor Yellow
    $ConfigFile.Save("$ConfigFilePath.orig")
}
# change the settings (create if missing, update if existing)
$ConfigFile.configuration.'system.applicationHost'.log.centralBinaryLogFile.SetAttribute("directory", [string]( Join-Path -Path $LogDrive -ChildPath "inetpub\logs\LogFiles" ) )
$ConfigFile.configuration.'system.applicationHost'.log.centralW3CLogFile.SetAttribute("directory", [string]( Join-Path -Path $LogDrive -ChildPath "inetpub\logs\LogFiles" ) )
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.logfile.SetAttribute("directory", [string]( Join-Path -Path $LogDrive -ChildPath "inetpub\logs\LogFiles" ) )
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.logfile.SetAttribute("logFormat", "W3C" )
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.logfile.SetAttribute("logExtFileFlags", "Date, Time, ClientIP, UserName, SiteName, ComputerName, ServerIP, Method, UriStem, UriQuery, HttpStatus, Win32Status, BytesSent, BytesRecv, TimeTaken, ServerPort, UserAgent, Cookie, Referer, ProtocolVersion, Host, HttpSubStatus" )
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.logfile.SetAttribute("period", "Hourly")
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.traceFailedRequestsLogging.SetAttribute("directory", [string]( Join-Path -Path $LogDrive -ChildPath "inetpub\logs\FailedReqLogFiles" ) )
$ConfigFile.configuration.'system.webServer'.httpCompression.SetAttribute("directory", [string]( Join-Path -Path $WebDrive -ChildPath "inetpub\temp\IIS Temporary Compressed File" ) )
$ConfigFile.configuration.'system.webServer'.httpCompression.SetAttribute("maxDiskSpaceUsage", "2048" )
$ConfigFile.configuration.'system.webServer'.httpCompression.SetAttribute("minFileSizeForComp", "5120" )
# Save the file
$ConfigFile.Save($ConfigFilePath)
Remove-Variable -Name ConfigFile -ErrorAction SilentlyContinue
#endregion

 

There's a lot going on here, so let me see if I can't explain it a little.

I'm accessing the IIS Application Host configuration file and making changes.  This file governs the entire IIS install, which is why I make a backup.

The changes are:

  • Change every log file location (the centralBinaryLogFile, centralW3CLogFile, siteDefaults logfile, and traceFailedRequestsLogging directory attributes)
  • Define the log type (the logFormat attribute, set to W3C)
  • Set the elements that I want in the logs (the logExtFileFlags attribute)
  • Set the log roll-over period to hourly (the period attribute)
  • Set the location for temporary compressed files (the httpCompression directory attribute)
  • Set my compression settings (maxDiskSpaceUsage and minFileSizeForComp)

 

Stage 9: Tweaking the ASP.NET Configuration Settings

We're working with XML again, but this time it's for the ASP.NET configuration.  I use the same process as in Stage 8, but the changes are different.  I take a backup again.

#region Change the ASP.NET Compilation Settings
# XML Object that will be used for processing
$ConfigFile = New-Object -TypeName System.Xml.XmlDocument
# Change the Compilation settings in the ASP.NET Web Config
$ConfigFilePath = "C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config\web.config"
Write-Host "Editing [$ConfigFilePath]" -ForegroundColor Yellow
# Load the Configuration File
$ConfigFile.Load($ConfigFilePath)
# Save a backup if one doesn't already exist
if ( -not ( Test-Path -Path "$ConfigFilePath.orig" -ErrorAction SilentlyContinue ) )
{
    Write-Host "Making Backup of $ConfigFilePath with '.orig' extension added" -ForegroundColor Yellow
    $ConfigFile.Save("$ConfigFilePath.orig")
}
# change the settings (create if missing, update if existing)
$ConfigFile.configuration.'system.web'.compilation.SetAttribute("tempDirectory", [string]( Join-Path -Path $WebDrive -ChildPath "inetpub\temp") )
$ConfigFile.configuration.'system.web'.compilation.SetAttribute("maxConcurrentCompilations", "16")
$ConfigFile.configuration.'system.web'.compilation.SetAttribute("optimizeCompilations", "true")
# Save the file
Write-Host "Saving [$ConfigFilePath]" -ForegroundColor Yellow
$ConfigFile.Save($ConfigFilePath)
Remove-Variable -Name ConfigFile -ErrorAction SilentlyContinue
#endregion

 

Again, there's a bunch going on here, but the big takeaway is that I'm changing the temporary location for ASP.NET compilations to the drive where the rest of my web content lives, along with the number of simultaneous compilations (the tempDirectory, maxConcurrentCompilations, and optimizeCompilations attributes).

 

Stage 10: Create NCM Roots

I hate having uploaded configuration files (from network devices) saved to the root drive.  This short script creates folders for them.

#region Create SFTP and TFTP Roots on the Web Drive
# Check for & Configure SFTP and TFTP Roots
$Roots = "SFTP_Root", "TFTP_Root"
ForEach ( $Root in $Roots )
{
    if ( -not ( Test-Path -Path ( Join-Path -Path $WebDrive -ChildPath $Root ) ) )
    {
        New-Item -Path ( Join-Path -Path $WebDrive -ChildPath $Root ) -ItemType Directory
    }
}
#endregion

 

Stage 11: Configure Folder Redirection

This is the weirdest thing that I do.  Let me see if I can explain.

 

My ultimate goal is to automate installation of the software itself.  The default directory for installing the software is C:\Program Files (x86)\SolarWinds\Orion (and a few others).  Since I don't really like installing any program (SolarWinds stuff included) on the O/S drive, this leaves me in a quandary.  I thought to myself, "Self, if this was running on *NIX, you could just do a symbolic link and be good."  Then I reminded myself, "Self, Windows has symbolic links available."  Then I just needed to tinker until I got things right.  After much annoyance, and rolling back to snapshots, this is what I got.

#region Folder Redirection
$Redirections = @()
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]1; SourcePath = "C:\ProgramData\SolarWinds"; TargetDrive = $ProgramsDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]2; SourcePath = "C:\ProgramData\SolarWindsAgentInstall"; TargetDrive = $ProgramsDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]3; SourcePath = "C:\Program Files (x86)\SolarWinds"; TargetDrive = $ProgramsDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]4; SourcePath = "C:\Program Files (x86)\Common Files\SolarWinds"; TargetDrive = $ProgramsDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]5; SourcePath = "C:\ProgramData\SolarWinds\Logs"; TargetDrive = $LogDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]6; SourcePath = "C:\inetpub\SolarWinds"; TargetDrive = $WebDrive } )
$Redirections | Add-Member -MemberType ScriptProperty -Name TargetPath -Value { $this.SourcePath.Replace("C:\", $this.TargetDrive ) } -Force
ForEach ( $Redirection in $Redirections | Sort-Object -Property Order )
{
    # Check to see if the target path exists - if not, create the target path
    if ( -not ( Test-Path -Path $Redirection.TargetPath -ErrorAction SilentlyContinue ) )
    {
        Write-Host "Creating Path for Redirection [$( $Redirection.TargetPath )]" -ForegroundColor Yellow
        New-Item -ItemType Directory -Path $Redirection.TargetPath | Out-Null
    }
    # Build the string to send to the command prompt
    $CommandString = "mklink /D /J `"$( $Redirection.SourcePath )`" `"$( $Redirection.TargetPath )`""
    Write-Host "Executing [$CommandString]... " -ForegroundColor Yellow -NoNewline
    # Execute it
    Start-Process -FilePath "cmd.exe" -ArgumentList "/C", $CommandString -Wait
    Write-Host "[COMPLETED]" -ForegroundColor Green
}
#endregion

The reason for the "Order" member in the Redirections object is that certain folders have to be built before others; i.e., I can't build X:\ProgramData\SolarWinds\Logs before I build X:\ProgramData\SolarWinds.

 

When complete, the redirected folders show up as directory junctions pointing at their new locations on the data drives.

Nice, right?

 

Stage 12: Pre-installing ODBC Drivers

I monitor many database server types with SolarWinds Server & Application Monitor.  They each require drivers, so I install them in advance (because I can).

#region Pre-Orion Install ODBC Drivers
#
# This is for any ODBC Drivers that I want to install to use with SAM
# You don't need to include any driver for Microsoft SQL Server - it will be done by the installer
# I have the drivers for MySQL and PostgreSQL in this share
#
# There is also a Post- share which includes the files that I want to install AFTER I install Orion.
$Drivers = Get-ChildItem -Path "\\Path\To\ODBC\Drivers\Pre\" -File
ForEach ( $Driver in $Drivers )
{
    if ( $Driver.Extension -eq ".exe" )
    {
        Write-Host "Executing $( $Driver.FullName )... " -ForegroundColor Yellow -NoNewline
        Start-Process -FilePath $Driver.FullName -Wait
        Write-Host "[COMPLETED]" -ForegroundColor Green
    }
    elseif ( $Driver.Extension -eq ".msi" )
    {
        # Install it using msiexec.exe
        Write-Host "Installing $( $Driver.FullName )... " -ForegroundColor Yellow -NoNewline
        Start-Process -FilePath "C:\Windows\System32\msiexec.exe" -ArgumentList "/i", "`"$( $Driver.FullName )`"", "/passive" -Wait
        Write-Host "[COMPLETED]" -ForegroundColor Green
    }
    else
    {
        Write-Host "Bork-Bork-Bork on $( $Driver.FullName )"
    }
}
#endregion

 

Running all of these with administrator privileges cuts this process down to 2 minutes and 13 seconds.  And over 77% of that is installing the Windows Features.

 

Execution time: 2:13

Time saved: over 45 minutes

 

This was originally published on my personal blog as Building my Orion Server [Scripting Edition] – Step 3 – Kevin's Ramblings

Since we launched the PerfStack™ feature at the beginning of 2017, we have seen a lot of interesting use cases from our customers. If you are unfamiliar with PerfStack, you can check out Drag & Drop Answers to Your Toughest IT Questions, where aLTeReGo outlines the basic functions of the feature. Those who are familiar have come to realize just how powerful the feature is for troubleshooting issues within their environment. Whether you're a system, network, virtualization, storage, or other IT administrator, being able to see metric data across the entire infrastructure stack is very valuable.

 

One of the common use cases I hear about lately is visualizing synthetic web transaction metrics in the context of application performance metrics. For instance, let's say you have an intranet site and you need to monitor it for performance issues, including end-user access and performance from multiple locations. With WPM and SAM, this is a reality. However, before PerfStack, you needed to browse multiple pages to compare metric data against each other.

 

In PerfStack, you can easily add all of the response times for WPM transactions, from multiple locations, to a single metric palette, and quickly visualize those metrics together. In the scenario of the intranet site mentioned above, you can see the average response time duration from each of the four locations that are monitoring this particular transaction.

You can also easily add all of the transaction steps, for a particular transaction, to provide a more granular view of the response time of your web applications. All you have to do is click on the related entities icon for a given transaction. Then, add the related transaction step entities and subsequent response time metrics. This will allow you to quickly see which steps are contributing to the elevated response time for the related transaction.


But what about the performance metrics of the application itself, and the infrastructure that hosts it? Those metrics are crucial to determining the root cause of application issues. With PerfStack, it is easy to quickly add those metrics to the metric palette using the related entities function. This does require a prerequisite to be configured: when configuring your WPM transaction, you will need to define related applications and nodes as shown below.

Once that is done, Orion does the rest. As you can see in AppStack, the relationship from the transaction to the application and the rest of the infrastructure stack is fully visible.


This will allow you to add and remove all of the necessary entities and metrics to a PerfStack project, to complete your troubleshooting efforts. The more Orion Platform products you have, the more entities you will be able to collect metrics from and be able to visualize in PerfStack. Below you can see the multitude of data available through the relationships in Orion. When all of the needed metrics are added, you can save the PerfStack project to recall for future troubleshooting efforts.


We have tried to make it easy to access the data needed to quickly troubleshoot any issues that arise in your environment. With the data at your fingertips, you can drastically reduce the mean time to resolution for many issues. We all know that a shorter mean time to resolution is important, because performance problems equate to unhappy end users. And when end users aren't happy...

[Image: iStock-135165692.jpg]

Hopefully you are already reaping the benefit from the many improvements that were made in Network Performance Monitor 12.1, Server & Application Monitor 6.4, Storage Resource Monitor 6.4, Virtualization Manager 7.1, Netflow Traffic Analyzer 4.2.2, and Network Configuration Manager 7.6. If you haven't yet had a chance to upgrade to these releases, I encourage you to do so at your earliest convenience, as there are a ton of exciting new features that you're missing out on.

 

Something a few of you who have already upgraded may have seen is one or more deprecation notices within the installer. These may have included references to older Windows operating systems or Microsoft SQL versions. Note that these deprecation notices will only appear when upgrading to any of the product versions listed above, provided you are installing on any of the Windows OS or SQL versions deprecated in those releases. But what does it mean when a feature or software dependency has been deprecated? Does this mean it's no longer supported, or those versions can't be used anymore?

 

[Image: Upgrade.png]

 

Many customers throughout the years have requested advance notice whenever older operating systems and SQL database versions would no longer be supported in future versions of Orion, allowing them sufficient time to properly plan for those upgrades. Deprecation does not mean that those versions can't be used, or that they are no longer supported at the time the deprecation notice is posted. Rather, those deprecated versions continue to remain fully supported, but future Orion product releases will likely no longer support them. As such, all customers affected by these deprecation notices should take this opportunity to begin planning their migrations if they wish to stay current with the latest releases. So what exactly was deprecated with the Q1'17 Orion product releases?

 

Windows Server 2008 R2

 

Released on October 22, 2009, Windows Server 2008 R2 SP1 reached the end of Microsoft mainstream support a little over five years later, on January 13, 2015. For customers, this means that while new security updates continue to be made available for the aging operating system, bug fixes for critical issues require a separate purchase of an Extended Hotfix Support contract agreement, in addition to paying for each fix requested. Since so few of our customers have such agreements with Microsoft, the only recourse is often an unplanned, out-of-cycle operating system upgrade.

 

Microsoft routinely launches new operating system versions, with major releases on average every four years and minor releases approximately every two. As new server operating system versions are released, customer adoption begins immediately thereafter; sometimes even earlier, during Community Technical Preview, when some organizations place production workloads on the pre-released operating system. Unfortunately, leveraging the technological advances these later versions of Windows provide occasionally requires losing backwards-compatibility support for some older versions along the way. Similar challenges also occur during QA testing whenever a new operating system is released. At some point it's simply not practical to thoroughly and exhaustively test every possible permutation of OS version, language, hotfix rollup, or service pack. Eventually the compatibility matrix becomes so unwieldy that a choice between quality and compatibility must be made; and really, that's not a choice at all.

 

 

SQL Server 2008 / 2008 R2

 

SQL Server 2008 was released on August 6, 2008, with SQL 2008 R2 released just four months shy of two years later, on April 21, 2010. Seven years later, there have been tremendous advances in Microsoft's SQL Server; from the introduction of new redundancy options, to technologies like In-Memory OLTP and columnstore indexes, which provide tremendous performance improvements. Maintaining compatibility with older versions of Microsoft SQL precludes Orion from leveraging these and other advances made in later releases of Microsoft SQL Server, many of which have the potential to tremendously accelerate the overall performance and scalability of future releases of the Orion Platform.

 

If you happen to be running SQL Server 2008 or SQL 2008 R2 on Windows Server 2008 or 2008 R2, not to worry. There's no need to forklift your existing SQL server prior to upgrading to the next Orion release. In fact, you don't even need to upgrade the operating system of your SQL server, either. Microsoft has made the in-place upgrade process from SQL 2008/R2 to SQL 2014 extremely simple and straightforward. If your SQL server is running on Windows Server 2012 or later, then we recommend upgrading directly to SQL 2016 SP1 or beyond so you can limit the potential for additional future upgrades when/if support for SQL 2012 is eventually deprecated.
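
If you're not sure exactly which SQL version your Orion database is sitting on today, one query will tell you. Below is a minimal sketch using the .NET SqlClient from PowerShell; "YourSqlServer" is a placeholder for your own database server.

#region Check the Orion database server's SQL version (minimal sketch; placeholder server name)
$Connection = New-Object -TypeName System.Data.SqlClient.SqlConnection( "Server=YourSqlServer;Database=master;Integrated Security=True" )
$Connection.Open()
$Command = $Connection.CreateCommand()
$Command.CommandText = "SELECT @@VERSION"
Write-Host ( $Command.ExecuteScalar() )   # e.g. "Microsoft SQL Server 2008 R2 (SP3) ..."
$Connection.Close()
#endregion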

 

Older Orion Version Support for Windows & SQL

 

Once new Orion product module versions are eventually released that no longer support running on Windows Server 2008, 2008 R2, or SQL 2008/R2, SolarWinds will continue to provide official support for those previously supported Orion module versions running on these older operating system and SQL Server versions. These changes only affect Orion module releases running Orion Core versions later than 2017.1. If you are already running the latest version of an Orion product module on Windows Server 2008/R2 or SQL 2008/R2 and have no ability to upgrade either of those in the near future, not to worry. Those product module versions will continue to be supported on those operating system and SQL versions for quite some time to come.

 

Monitoring vs Running

 

While the next release of Orion will no longer support running on Windows or SQL 2008/R2, monitoring systems which are running on these older versions of Windows and SQL will absolutely remain supported. This also includes systems where the Orion Agent is deployed. What that means is that if you're using the Orion Agent to monitor systems running on Windows Server 2008 or Windows Server 2008 R2, rest assured that support for monitoring those older systems with the Orion Agent remains fully intact in the next Orion release. The same is also true if you're monitoring Windows or SQL 2008/R2 agentlessly via WMI, SNMP, etc. Your next upgrade will not impact your ability to monitor these older operating systems or SQL versions in any way.

 

 

32Bit vs 64Bit

 

Support for installing evaluations on 32bit operating systems will also be dropped from all future releases of Orion product modules, allowing us to begin the migration of the Orion codebase to 64bit. Doing this should improve the stability, scalability, and performance of larger Orion deployments. Once new product versions begin to be released without support for 32bit operating systems, users wishing to evaluate Orion based products on a 32bit operating system are encouraged to contact Sales to obtain earlier product versions which support 32bit operating systems.

 

 

.NET 4.6.2

 

Current Orion product module releases, such as Network Performance Monitor 12.1 and Server & Application Monitor 6.4, require a minimum of .NET 4.5.1. All future Orion product module releases built atop Core versions later than 2017.1 will require a minimum of Microsoft .NET 4.6.2, which was released on 7/20/2016. This version of .NET is also fully compatible with all currently shipping and supported Orion product module releases, so there's no need to wait until your next Orion module upgrade to update to this newer version of .NET. Subsequently, .NET 4.7 was released on 5/2/2017 and is equally compatible with all existing Orion product module versions, in the event you would prefer to upgrade directly to .NET 4.7 and bypass .NET 4.6.2 entirely.
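
If you'd like to check which .NET Framework 4.x release is already installed before your maintenance window, the registry holds the answer. Below is a minimal sketch (not an official SolarWinds check): a Release value of 394802 or higher under the v4\Full key indicates .NET 4.6.2 or later.

#region Check the installed .NET Framework 4.x release (minimal sketch)
$Release = ( Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full" -Name Release ).Release
if ( $Release -ge 394802 ) { Write-Host ".NET 4.6.2 or later detected (Release $Release)" -ForegroundColor Green }
else { Write-Host ".NET 4.6.2 not yet installed (Release $Release)" -ForegroundColor Yellow }
#endregion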

 

It's important to note that Microsoft .NET 4.6.2 has a hard dependency on Windows Update KB2919355, which was released in April 2014 for Windows Server 2012 R2 and Windows 8.1. This Windows Update dependency is rather sizable, coming in at between 319MB and 690MB. It also requires a reboot before .NET 4.6.2 can be installed and function properly. As a result, if you don't already have .NET 4.6.2 installed, you may want to plan for this upgrade during your next scheduled maintenance window to ensure your next Orion upgrade goes as smoothly and quickly as possible.
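
Before scheduling that reboot, it's worth confirming whether KB2919355 is already present on your Windows Server 2012 R2 or Windows 8.1 systems. A quick, hedged check using the built-in Get-HotFix cmdlet:

#region Check for the KB2919355 prerequisite (minimal sketch)
if ( Get-HotFix -Id "KB2919355" -ErrorAction SilentlyContinue )
{ Write-Host "KB2919355 is already installed - clear to install .NET 4.6.2" -ForegroundColor Green }
else
{ Write-Host "KB2919355 is missing - install it (and reboot) before .NET 4.6.2" -ForegroundColor Yellow }
#endregion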

 

Minimum Memory Requirements

 

With many of the changes referenced above, minimum system requirements have also needed adjustment. Windows Server 2012 and later operating systems utilize more memory than previous versions. Similarly, .NET 4.6 can also utilize slightly more memory than .NET 4.5.1. As we move forward, 64bit processes inherently use more memory than the same process compiled for 32bit. To ensure users have a pleasurable experience running the next version of Orion products, we will be increasing the absolute minimum memory requirement from 4GB to 6GB of RAM for future versions of Orion product modules. The recommended minimum memory requirement, however, will remain at 8GB.
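
If you're curious how your current Orion server stacks up against the new minimums, a single WMI query will tell you. A minimal sketch:

#region Compare installed RAM against the new minimums (minimal sketch)
$RamGB = [math]::Round( ( Get-WmiObject -Class Win32_ComputerSystem ).TotalPhysicalMemory / 1GB, 1 )
if ( $RamGB -ge 8 ) { Write-Host "$RamGB GB RAM - meets the recommended minimum" -ForegroundColor Green }
elseif ( $RamGB -ge 6 ) { Write-Host "$RamGB GB RAM - meets the new absolute minimum" -ForegroundColor Yellow }
else { Write-Host "$RamGB GB RAM - below the new 6GB minimum" -ForegroundColor Red }
#endregion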

 

While most readers today would never consider using a Windows 10 laptop on a day-in, day-out basis with just 4GB of RAM, those same people likely wouldn't imagine running an enterprise-grade, server-based monitoring solution on a system with similar specs, either. If you do, however, find yourself in an environment running Orion on 4GB of RAM today, an 8GB memory upgrade can typically be had for less than $100.00. This can be done before the next release of Orion product modules and will likely provide a significant and immediate improvement to the overall performance of your Orion server.

 

 

How to Prepare

 

All items listed above can be completed prior to the release of the next Orion product module versions and will ensure your next upgrade goes off without a hitch. This post is intended to provide anyone impacted by these changes with sufficient notice to plan any of these upgrades during their regularly scheduled maintenance periods, rather than during the upgrade process itself. In-place upgrades of SQL, as stated above, are a fairly simple and effective way to get upgraded quickly with the least possible amount of effort. If you're running Orion on Windows Server 2008 or 2008 R2, in-place OS upgrades are also feasible. If either of these is not feasible or desirable for any reason, you can migrate your Orion installation to a new server or migrate your Orion database to a new SQL server by following the steps outlined in our migration guide.

 

Other Options

 

If for any reason you find yourself running Orion on Windows Server 2008, Server 2008 R2, or SQL 2008/R2 and are unable to upgrade, don't fret. The current releases of Orion product modules will continue to remain fully supported for quite some time to come. There is absolutely zero requirement to be on the latest releases to receive technical support. In almost all cases, you can also utilize newly published content from THWACK's Content Exchange with previous releases, such as Application Templates, Universal Device Pollers, Reports, and NCM Configuration Change Templates. When you're ready to upgrade, we'll be here with plenty of exciting new features, enhancements, and improvements.

 

 

Planning Beyond The Next Release

 

At any given time, Orion supports running on a minimum of three major versions of the Windows Operating System and SQL database server. When a new server OS or SQL version is released by Microsoft, SolarWinds makes every effort possible to support up to four OS and SQL versions for a minimum of one Orion product module release. If at any time you find yourself four releases behind the most current OS or SQL server version, you may want to begin planning an in-place upgrade or migration to a new server during your next regularly scheduled maintenance window to ensure your next Orion product module upgrade goes flawlessly.

 

For your reference, below is a snapshot of the Windows operating system and SQL Server versions which will be supported in the next release of Orion product modules. This list is not finalized and is still subject to change before release. However, nothing will be removed from this list, though additional version support could be added after this posting.

 

Supported Operating System Versions | Supported Microsoft SQL Server Versions
Server 2012                         | SQL 2012
Server 2012 R2                      | SQL 2014
Server 2016                         | SQL 2016

[Image: PoShSWI.png]

In a previous post, I showed off a little PowerShell script that I've written to build my SolarWinds Orion servers.  That post left us with a freshly imaged Windows Server.  Like I said before, you can install the O/S however you like.  I used Windows Deployment Services because I'm comfortable with it.

 

I used Windows Server 2016 because this is my lab and...

[Image: giphy.gif]

 

Now I've got a list of things that I want to do to this machine.

  1. Bring the Disks Online & Initialize
  2. Format Disks & Disable Indexing
  3. Configure the Page File
  4. Import the Certificate for SSL (Optional)

 

Because I'm me, I do this with PowerShell.  I'm going to go through each stage one by one.

Stage 0: Declare Variables

I don't list this as a stage above because it's something that I have in every script.  Before I even get into this, I need to define my variables.  For this, it's disk numbers, drive letters, and labels.

#region Build Disk List
$DiskInfo  = @()
$DiskInfo += New-Object -TypeName PSObject -Property ( [ordered]@{ DiskNumber = [int]1; DriveLetter = "D"; Label = "Page File" } )
$DiskInfo += New-Object -TypeName PSObject -Property ( [ordered]@{ DiskNumber = [int]2; DriveLetter = "E"; Label = "Programs" } )
$DiskInfo += New-Object -TypeName PSObject -Property ( [ordered]@{ DiskNumber = [int]3; DriveLetter = "F"; Label = "Web" } )
$DiskInfo += New-Object -TypeName PSObject -Property ( [ordered]@{ DiskNumber = [int]4; DriveLetter = "G"; Label = "Logs" } )
#endregion

 

This looks simple, because it is.  It's simply a list of the disk numbers, the drive letters, and the labels that I want to use for the additional drives.

Stage 1: Bring the Disks Online & Initialize

Since I need to bring all offline disks online and choose a partition type (GPT), I can do this all at once.

#region Online & Enable RAW disks
Get-Disk | Where-Object { $_.OperationalStatus -eq "Offline" } | Set-Disk -IsOffline:$false
Get-Disk | Where-Object { $_.PartitionStyle -eq "RAW" } | ForEach-Object { Initialize-Disk -Number $_.Number -PartitionStyle GPT }
#endregion

 

Stage 2: Format Disks & Disable Indexing

This is where I really use the variables that are declared in Stage 0.  I do this with a ForEach Loop.

#region Create Partitions & Format
$FullFormat = $false # indicates a "quick" format
ForEach ( $Disk in $DiskInfo )
{
    # Create Partition and then Format it
    New-Partition -DiskNumber $Disk.DiskNumber -UseMaximumSize -DriveLetter $Disk.DriveLetter | Out-Null
    Format-Volume -DriveLetter $Disk.DriveLetter -FileSystem NTFS -AllocationUnitSize 64KB -Force -Confirm:$false -Full:$FullFormat -NewFileSystemLabel $Disk.Label
    
    # Disable Indexing via WMI
    $WmiVolume = Get-WmiObject -Query "SELECT * FROM Win32_Volume WHERE DriveLetter = '$( $Disk.DriveLetter ):'"
    $WmiVolume.IndexingEnabled = $false
    $WmiVolume.Put()
}
#endregion

 

We're getting closer!  Now we've got this:

[Image: DiskConfiguration.png]

 

Stage 3: Configure the Page File

The "best practices" for page files are crazy and all over the board.  Are you using flash storage?  Do you keep it on the O/S disk?  Do you declare a fixed size?  I decided to fall back on settings that I've used for years:

  • The page file does not live on the O/S disk.
  • The page file is statically set.
  • The page file is RAM size + 257MB.

In script form, this looks something like this:

#region Set Page Files
$CompSys = Get-WmiObject -Class Win32_ComputerSystem -EnableAllPrivileges
# is the system set to use system managed page files
if ( $CompSys.AutomaticManagedPagefile )
{
    # if so, turn it off
    $CompSys.AutomaticManagedPagefile = $false
    $CompSys.Put()
}
# Set the size to 16GB + 257MB (per Microsoft Recommendations) and move it to the D:\ Drive
# as a safety-net I also keep 8GB on the C:\ Drive.
$PageFileSettings = @()
$PageFileSettings += "c:\pagefile.sys 8192 8192"
$PageFileSettings += "d:\pagefile.sys 16641 16641"
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\" -Name "pagingfiles" -Type multistring -Value $PageFileSettings
#endregion

 

Stage 4: Import the Certificate for SSL (Optional)

Since this is my lab, I get to do what I want.  (See above)  I include using SSL for Orion.  I have a wildcard certificate that I can use within my lab, so if I import it, then I can enable SSL when the configuration wizard runs.  This certificate is saved on a DFS share in my lab.  This is the script to import it.

#region Import Certificate
# Lastly, import my internal PKI Certificate for use with HTTPS
$CertName = "WildcardCert_demo.lab"
$CertPath = "\\Demo.Lab\Files\Data\Certificates\"
$PfxFile = Get-ChildItem -Path $CertPath -Filter "$CertName.pfx"
$PfxPass = ConvertTo-SecureString -String ( Get-ChildItem -Path $CertPath -Filter "$CertName.password.txt" | Get-Content -Raw ) -AsPlainText -Force
# Import into the local machine store so the Orion website (IIS) can use the certificate
Import-PfxCertificate -FilePath $PfxFile.FullName -Password $PfxPass -CertStoreLocation Cert:\LocalMachine\My
#endregion

 

That's it.  Now the disks are set up, the page file is set, and the certificate is installed.  Next, I rename the computer, reboot, run Windows Updates, reboot, run Windows Updates, reboot, run Windows Updates... (you see where this is going, right?)

 

Execution Time: 16 seconds

Time saved: at least 15 minutes.

 

There's still some prep work I can do via scripting and I'll provide that next.

 

This is a cross-post from my personal blog post Building my Orion Server [Scripting Edition] – Step 2 – Kevin's Ramblings

As the Product Manager for Online Demos, I need to install the SolarWinds Orion platform frequently... sometimes as many as 4 times per month.  This can get tiresome, but I've gotten some assistance from PowerShell, the community, and some published help documents.

 

I've thought about scripting these out for a while now and I came up with a list of things to do.

  1. Build the virtual machines
  2. Pre-configure the virtual machine's disks
  3. Prep the machine for installation
  4. Install the software (silently)
  5. Finalize the installation (silently)

This post is the first step in this multi-step process - Building your virtual machine.

 

Now, depending on your hypervisor, there are two different paths to follow: Hyper-V or VMware.  In my lab, I've got both because I try to be as agnostic as possible.  It's now time to start building the script.  I'm going to use PowerShell.

 

Scripting Preference: PowerShell

Preference Reasoning: I know it and I'm comfortable using it.

 

Hyper-V vs. VMware

 

Each hypervisor has different requirements when building a virtual machine, but some are the same for each - specifically the number & size of disks, the CPU count, and the maximum memory.  The big deviation comes from the way that each hypervisor handles memory & CPU reservations.

 

Hyper-V handles CPU reservation as a percentage of the total, whereas VMware handles it via the number of MHz (for example, reserving 50% of 4 vCPUs on 2,600MHz cores works out to 4 × 0.50 × 2,600 = 5,200MHz).  I've elected to keep the reservation as a percentage.  It seemed easier to keep straight (in my head) and only required minor tweaks to the script.

 

Step 1 - Variable Declaration

  • VM Name [string] - both
  • Memory (Max Memory for VM) [integer] - both
  • CPU Count (number of CPUs) [integer] - both
  • CPU Reservation (percentage) [integer] - both
  • Disk Letters and Sizes - both
  • Memory at Boot (Memory allocated at boot) [integer] - Hyper-V
  • Memory (Minimum) [integer] - Hyper-V
  • Use Dynamic Disks [Boolean] - Hyper-V
  • VLAN (VLAN ID to use for Network Adapter) [integer] - Hyper-V
  • vCenter Name [string] - VMware
  • ESX Host [string] - VMware
  • Disk Format ("thin", "thick", etc.) [string] - VMware
  • VLAN (VLAN name to use for Network Adapter) [string] - VMware
  • Guest OS (identify the Operating System) [string] - VMware

 

Step 2 - Build the VM

Building the VM is an easy step that actually takes only 1 line using the "New-VM" command (regardless of hypervisor).  The syntax and parameters change depending on the hypervisor, but otherwise we just build the shell.  In Hyper-V, I do this in two commands, and in VMware I do it in one.

 

Step 3 - Assign Reservations

This is a trickier step in VMware because it uses MHz and not percentages.  For that, I need to know the clock speed (MHz) of the processors in the host.  Thankfully, this can be calculated pretty easily.  Then I just set the CPU & memory reservations based on each hypervisor's requirements.

 

Step 4 - Assign VLAN

Hyper-V uses the VLAN ID (integer) and VMware uses the VLAN Name (string).  It's nearly the same command with just a different parameter.

 

Step 5 - Congratulate yourself.

Hyper-V: [Image: OrionServer_Hyper-V.png]
VMware: [Image: OrionServer_VMware.png]

 

Execution Time: 9 seconds on either architecture.

Time saved: at least 10 minutes.

 

The full script is below.

 

#region Variable Declaration
$VMName       = "OrionServer" # Virtual Name
$Architecture = "Hyper-V"     # (or "VMware")
# Global Variable Declaration
$CPUCount     = 4             # Number of CPU's to give the VM
$CPUReserve   = 50            # Percentage of CPU's being reserved
$RAMMax       = 16GB          # Maximum Memory
# Sizes and count of the disks
$VHDSizes = [ordered]@{ "C" =  40GB; # Boot
                        "D" =  30GB; # Page
                        "E" =  40GB; # Programs
                        "F" =  10GB; # Web
                        "G" =  10GB; # Logs
                       } 
#endregion
# Architecture-specific commands
if ( $Architecture -eq "Hyper-V" )
{
    #region Import the Hyper-V Module & Remove the VMware Module (if enabled)
    # This is done first because the two modules define colliding function names (e.g. Get-VMHost)
    if ( Get-Module -Name "VMware.PowerCLI" -ErrorAction SilentlyContinue )
    {
        Remove-Module VMware.PowerCLI -Confirm:$false -Force
    }
    if ( -not ( Get-Module -Name "Hyper-V" -ErrorAction SilentlyContinue ) )
    {
        Import-Module -Name "Hyper-V" -Force
    }
    #endregion Import the Hyper-V Module & Remove the VMware Module
    $RAMBoot      = 8GB           # Startup Memory
    $RAMMin       = 8GB           # Minimum Memory (should be the same as RAMBoot)
    $DynamicDisks = $true         # Use Dynamic Disks?
    $Vlan         = 300           # VLAN assignment for the Network Adapter
    # Assume that we want to make all the VHDs in the default location for this server.
    $VHDRoot = Get-Item -Path ( Get-VMHost | Select-Object -ExpandProperty VirtualHardDiskPath )
    # Convert the hash table of disks into PowerShell Objects (easier to work with)
    $VHDs = $VHDSizes.Keys | ForEach-Object { New-Object -TypeName PSObject -Property ( [ordered]@{ "ServerName" = $VMName; "Drive" = $_ ; "SizeBytes" = $VHDSizes[$_] } ) }
    # Extend this object with the name that we'll want to use for the VHD
    # My naming scheme is [MACHINENAME]_[DriveLetter].vhdx - adjust to match your own.
    $VHDs | Add-Member -MemberType ScriptProperty -Name VHDPath -Value { Join-Path -Path $VHDRoot -ChildPath ( $this.ServerName + "_" + $this.Drive + ".vhdx" ) } -Force
    # Create the VHDs
    $VHDs | ForEach-Object { 
        if ( -not ( Test-Path -Path $_.VHDPath -ErrorAction SilentlyContinue ) )
        {
            Write-Verbose -Message "Creating VHD at $( $_.VHDPath ) with size of $( $_.SizeBytes / 1GB ) GB"
            New-VHD -Path $_.VHDPath -SizeBytes $_.SizeBytes -Dynamic:$DynamicDisks | Out-Null
        }
        else
        {
            Write-Host "VHD: $( $_.VHDPath ) already exists!" -ForegroundColor Red
        }
    }
    # Step 1 - Create the VM itself (shell) with no Hard Drives to Start
    $VM = New-VM -Name $VMName -MemoryStartupBytes $RAMBoot -SwitchName ( Get-VMSwitch | Select-Object -First 1 -ExpandProperty Name ) -NoVHD -Generation 2 -BootDevice NetworkAdapter
    # Step 2 - Bump the CPU Count
    $VM | Set-VMProcessor -Count $CPUCount -Reserve $CPUReserve
    # Step 3 - Set the Memory for the VM
    $VM | Set-VMMemory -DynamicMemoryEnabled:$true -StartupBytes $RAMBoot -MinimumBytes $RAMMin -MaximumBytes $RAMMax
    # Step 4 - Set the VLAN for the Network device
    $VM | Get-VMNetworkAdapter | Set-VMNetworkAdapterVlan -Access -VlanId $Vlan
    # Step 5 - Add Each of the VHDs
    $VHDs | ForEach-Object { $VM | Add-VMHardDiskDrive -Path $_.VHDPath }
}
elseif ( $Architecture -eq "VMware" )
{
    #region Import the VMware Module & Remove the Hyper-V Module (if enabled)
    # This is done because there are collisions in the names of functions
    if ( Get-Module -Name "Hyper-V" -ErrorAction SilentlyContinue )
    {
        Remove-Module -Name "Hyper-V" -Confirm:$false -Force
    }
    if ( -not ( Get-Module -Name "VMware.PowerCLI" -ErrorAction SilentlyContinue ) )
    {
        Import-Module VMware.PowerCLI -Force
    }
    #endregion Import the VMware Module & Remove the Hyper-V Module
    $vCenterServer = "vCenter.Demo.Lab"
    $DiskFormat = "Thin" # or "Thick" or "EagerZeroedThick"
    $VlanName = "External - VLAN 300"
    $GuestOS = "windows9Server64Guest" # OS Identifier of the Machine
    #region Connect to vCenter server via Trusted Windows Credentials
    if ( -not ( $global:DefaultVIServer ) )
    {
        Connect-VIServer -Server $vCenterServer
    }
    #endregion Connect to vCenter server via Trusted Windows Credentials
    # Find the host with the most free MHz or specify one by using:
    # $VMHost = Get-VMHost -Name "ESX Host Name"
    $VmHost = Get-VMHost | Sort-Object -Property @{ Expression = { $_.CpuTotalMhz - $_.CpuUsageMhz } } -Descending | Select-Object -First 1
    # Calculate the MHz for each processor on the host
    $MhzPerCpu = [math]::Floor( $VMHost.CpuTotalMhz / $VMHost.NumCpu )
    # Convert the Disk Sizes to a list of numbers (for New-VM Command)
    $DiskSizes = $VHDSizes.Keys | Sort-Object | ForEach-Object { $VHDSizes[$_] / 1GB }
    # Create the VM
    $VM = New-VM -Name $VMName -ResourcePool $VMHost -DiskGB $DiskSizes -MemoryGB ( $RAMMax / 1GB ) -DiskStorageFormat $DiskFormat -GuestId $GuestOS -NumCpu $CPUCount
    # Setup minimum resources
    # CPU is Number of CPUs * Reservation (as percentage) * MHz per Processor
    $VM | Get-VMResourceConfiguration | Set-VMResourceConfiguration -CpuReservationMhz ( $CPUCount * ( $CPUReserve / 100 ) * $MhzPerCpu ) -MemReservationGB ( $RAMMax / 2GB )
    # Set my VLAN
    $VM | Get-NetworkAdapter | Set-NetworkAdapter -NetworkName $VlanName -Confirm:$false
}
else
{
    Write-Error -Message "Neither Hyper-V nor VMware defined as `$Architecture"
}

 

Next step is to install the operating system.  I do this with Windows Deployment Services.  Your mileage may vary.

 

After that, we need to configure the machine itself.  That'll be the next post.

 

About this post:

This post is a combination of two posts on my personal blog: Building my Orion Server [VMware Scripting Edition] – Step 1 & Building my Orion Server [Hyper-V Scripting Edition] – Step 1.

We’re happy to announce the release of SolarWinds® Port Scanner, a standalone free tool that delivers a list of open, closed, and filtered ports for each scanned IP address.

Designed for network administrators of businesses of all sizes, Port Scanner gives your team insight into TCP and UDP port statuses, can resolve hostnames and MAC addresses, and detects operating systems. Even more importantly, it enables users to run scans from a CLI and export the results to a file.

What else does Port Scanner do?

  • Supports threading using advanced adaptive timing behavior based on network status monitoring and feedback mechanisms, in order to shorten the scan run time
  • Allows you to save scan configurations into a scan profile so you can run the same scan again without redoing previous configurations
  • Resolves hostnames using default local machine DNS settings, with the alternative option to define a DNS server of choice
  • Exports to XML, XLSX, and CSV file formats

For more detailed information about Port Scanner, please see the SolarWinds Port Scanner Quick Reference guide here on THWACK®: https://thwack.solarwinds.com/docs/DOC-190862

 

Download SolarWinds Port Scanner: http://www.solarwinds.com/port-scanner

We are excited to share that we've reached GA for Web Help Desk (WHD) 12.5.1.

 

This service release includes:

 

Improved application support

  • Microsoft® Office 365
  • Microsoft Exchange Server 2016
  • Microsoft Windows Server® 2016

Improved protocol and port management

  • The Server Options setting provides a user interface to manage the HTTP and HTTPS ports from the Web Help Desk Console. You can enable the listening port to listen for HTTP or HTTPS requests, configure the listening port numbers, and create a custom port for generated URL links.

Improved keystore management

  • The Server Options setting also includes Keystore Options. Use this option to create a new Java Keystore (JKS) or a Public-Key Cryptography Standards #12 (PKCS12) KeyStore to store your SSL certificates.

Improved SSL certificate management

  • The Certificates setting allows you to view the SSL certificates in the KeyStore that provide a secure connection between a resource and the Web Help Desk Console.

Improved APNS management

  • The Certificates setting also allows you to monitor your Apple Push Certification Services (APNS) Certificate expiration date and upload a new certificate prior to the expiration date.

Brand new look and feel

  • The new user interface offers a clean visual design that eliminates visual noise to help you focus on what is important.

Other improvements

  • Apache Tomcat updated to 7.0.77

 

We encourage all customers to upgrade to this latest release, which is available within your customer portal.

Thank you!

SolarWinds Team

With the key application I support, our production environment is spread across a Citrix farm of 24 servers connected to an AppServer farm of 6 servers, all with load-balanced Web and App Services.  So the question is: when is my application down?  If a Citrix server is offline?  If a Web or App Service is down on one AppServer?  We have to assess the criticality of our application status.

 

We have determined that if 1 service is down, it does not affect the availability of the application or even the experience of our users.  Truth be told, the application can support all users even if only one AppServer is running the Document Service, for example.  Of course, in that scenario we have no redundancy and no safety net.

 

So I created a script that allows us to look at a particular service and, based on the number of instances running, determine a criticality.

 

Check out Check Multiple Nodes for a Running Service, Then Show Application Status Based on Number of Instances

 

Within this script, you can identify a list of servers to poll and a minimum critical value, and return either up, warning, or critical for the application component based on the number of instances.
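
As a rough illustration of the approach (not the actual script from the link above), a SAM PowerShell component monitor along these lines counts the running instances of a service across a server list and maps that count to a status. The server names, service name, and threshold below are placeholders, and the Statistic/Message output and exit codes follow SAM's script-monitor conventions (0 = Up, 2 = Warning, 3 = Critical).

#region Count running service instances across servers (rough sketch; placeholder names)
$Servers     = @( "CTXAPP01", "CTXAPP02", "CTXAPP03" )  # servers to poll
$ServiceName = "DocumentService"                        # service to check
$MinCritical = 1                                        # at or below this count = critical

# Count how many servers report the service as Running
$Running = @( $Servers | ForEach-Object {
        Get-Service -ComputerName $_ -Name $ServiceName -ErrorAction SilentlyContinue
    } | Where-Object { $_.Status -eq "Running" } ).Count

Write-Host "Statistic: $Running"
Write-Host "Message: $Running of $( $Servers.Count ) instances running"

if ( $Running -le $MinCritical ) { exit 3 }        # critical
elseif ( $Running -lt $Servers.Count ) { exit 2 }  # warning - redundancy reduced
else { exit 0 }                                    # up
#endregion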

What is NIST 800-171 and how does it differ from NIST 800-53?

 

NIST SP 800-171 – "Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations" – provides guidance for protecting the confidentiality of Controlled Unclassified Information (CUI) residing in non-federal information systems and organizations. The publication is focused on information that is shared by federal agencies with a non-federal entity. If you are a contractor or sub-contractor to government agencies and CUI resides on your information systems, NIST 800-171 will impact you.

 

Cybercriminals regularly target federal data such as healthcare records, Social Security numbers, and more. It is vital that this information is protected when residing on non-federal information systems. NIST 800-171 has an implementation deadline of 12/31/2017, which has contractors scrambling.

 

Many of the controls contained within NIST 800-171 are based on NIST 800-53, but they are tailored to protect CUI in nonfederal information systems. There are 14 “families” of controls within NIST 800-171, but before we delve into those, we should probably discuss Controlled Unclassified Information (CUI) and what it is.

 

There are several categories and subcategories of CUI, which you can view here. You may be familiar with Sensitive But Unclassified (SBU) information—there were various categories that fell under SBU—but CUI replaces SBU and all its sub-categories. CUI is information that is not classified, but that is in the federal government’s best interest to protect.

 

NIST 800-171 Requirements

As mentioned above, there are 14 families of controls within NIST 800-171. These are:

 

  • Access Control
  • Awareness and Training
  • Audit and Accountability
  • Configuration Management
  • Identification and Authentication
  • Incident Response
  • Maintenance
  • Media Protection
  • Personnel Security
  • Physical Protection
  • Risk Assessment
  • Security Assessment
  • System and Communications Protection
  • System and Information Integrity

 

We will now delve further into each of these categories and discuss the basic and derived security requirements where SolarWinds® products can help. Basic security requirements are high-level requirements, whereas derived requirements are the controls you need to put in place to meet the high-level objective of the basic requirements.

 

3.1 Access Control

3.1.1 – Limit information system access to authorized users, processes acting on behalf of authorized users, or devices (including other information systems).

 

3.1.2 – Limit information system access to the types of transactions and functions that authorized users are permitted to execute.

 

This category limits access to systems to authorized users only and limits user activity to authorized functions only. There are a few areas within Access Control where our products can help, but many of these controls are implemented at the policy or device levels.

 

3.1.5 – Employ the principle of least privilege, including for specific security functions and privileged accounts.

SolarWinds Log & Event Manager (LEM) can audit deviations from least privilege—e.g., unauthorized file access and unexpected system access. Auditing can be done in real time or via reports. LEM can also monitor Microsoft® Active Directory® (AD) for unexpected escalated privileges being assigned to a user.

 

3.1.6 – Use non-privileged accounts when accessing non-security functions.

SolarWinds LEM can monitor privileged account usage and audit the use of privileged accounts for non-security functions.

 

3.1.7 – Prevent non-privileged users from executing privileged functions and audit the execution of such functions.

Execution of privileged functions, such as creating and modifying registry keys and editing system files, can be audited in real time or via reports in LEM. On the network device side, SolarWinds Network Configuration Manager (NCM) includes a change approval system, which helps ensure that non-privileged users cannot execute privileged functions without approval from a privileged user.

 

3.1.8 – Limit unsuccessful logon attempts.

The number of logon attempts before lockout is generally set at the domain/system policy level, but LEM can confirm that the lockout policy is being enforced via reports/nDepth. LEM can also be used to report on unsuccessful logon attempts, as well as automatically lock a user account via the Active Response feature.

 

3.1.12 – Monitor and control remote access sessions.

LEM can monitor and report on remote logons. Correlation rules can be configured to alert and respond to unexpected remote access (e.g., access outside normal business hours). SolarWinds NCM can audit how remote access is configured on your network device, identify any configuration violations, and remediate accordingly.

 

3.1.21 – Limit use of organizational portable storage devices on external information systems.

LEM can audit and restrict usage of portable storage devices with its USB Defender feature.

 

3.2 Awareness and Training

3.2.1 – Ensure that managers, systems administrators, and users of organizational information systems are made aware of the security risks associated with their activities and of the applicable policies, standards, and procedures related to the security of organizational information systems.

 

3.2.2 – Ensure that organizational personnel are adequately trained to carry out their assigned information security-related duties and responsibilities.

 

This section relates to user awareness training, especially around information security. Users should be aware of policies, procedures, and attack vectors such as phishing, malicious email attachments, and social engineering. Unfortunately, SolarWinds can’t provide information security training for your users—we would if we could!

 

3.3 Audit and Accountability

3.3.1 – Create, protect, and retain information system audit records to the extent needed to enable the monitoring, analysis, investigation, and reporting of unlawful, unauthorized, or inappropriate information system activity.

 

3.3.2 – Ensure that the actions of individual information system users can be uniquely traced to those users so they can be held accountable for their actions.

 

This set of controls helps ensure that audit logs are in place and that they are monitored to identify unauthorized or suspicious activity. These controls relate to the data you want LEM to ingest and how those logs are protected and retained. LEM can help satisfy some of the controls in this section directly. NCM also includes some powerful features which can assist with the Audit and Accountability controls, including real-time change detection, configuration change reports, and a change approval system.

 

3.3.3 – Review and update audited events.

LEM helps with the review of audited events, provided the appropriate logs are sent to LEM.

 

3.3.4 – Alert in the event of an audit process failure.

LEM can generate alerts when agents go offline or the log storage database is running low on space. LEM can also alert on behalf of systems when audit logs are cleared—e.g., if a user clears the Windows® event log.

 

3.3.5 – Correlate audit review, analysis and reporting processes for investigation and response to indications of inappropriate, suspicious, or unusual activity.

LEM’s correlation engine and reporting can assist with audit log reviews and help ensure that administrators are alerted to indications of inappropriate, suspicious, or unusual activity.

 

3.3.6 – Provide audit reduction and report generation to support on-demand analysis and reporting.

Audit logs can generate a huge amount of information. LEM can analyze event logs and generate scheduled or on-demand reports to assist with analysis. However, you will need to ensure that your audit policies and logging levels are appropriately configured.

 

3.3.7 – Provide an information system capability that compares and synchronizes internal system clocks with an authoritative source to generate time stamps for audit records.

LEM satisfies this requirement through Network Time Protocol server synchronization. LEM also includes a predefined correlation rule that monitors for time synchronization failures.

 

3.3.8 – Protect audit information and audit tools from unauthorized access, modification, and deletion.

LEM helps satisfy this requirement through the various mechanisms outlined in this post: Log & Event Manager Appliance Security and Data Protection.

 

3.3.9 – Limit management of audit functionality to a subset of privileged users.

As per the response to 3.3.8, LEM provides role-based access control, which limits access and functionality to a subset of privileged users.

 

3.4 Configuration Management

3.4.1 Establish and maintain baseline configurations and inventories of organizational information systems (including hardware, software, firmware, and documentation) throughout the respective system development life cycles.

 

3.4.2 Establish and enforce security configuration settings for information technology products employed in organizational information systems.

 

Minimum acceptable configurations must be maintained and change management controls must be in place. Inventory comes into play here, too. NCM will have the biggest impact here (on the network device side), thanks to its ability to establish baseline configurations and report on violations. LEM and SolarWinds Patch Manager can also play roles within this set of controls.

 

3.4.3 – Track, review, approve/disapprove, and audit changes to information systems.

NCM’s real-time change detection, change approval management and tracking reports can be used to detect, validate, and document changes to network devices. LEM can monitor and audit changes to information systems, provided the appropriate logs are sent to LEM.

 

3.4.8 – Apply deny-by-exception (blacklist) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (whitelisting) policy to allow the execution of authorized software.

LEM can monitor for the use of unauthorized software. Thanks to Active Response, you can configure LEM to automatically kill nonessential programs and services.

 

3.4.9 – Control and monitor user-installed software.

LEM can audit software installations and alert accordingly. Patch Manager can inventory machines on your network and report on the software and patches installed.

 

3.5 Identification and Authentication

3.5.1 Identify information system users, processes acting on behalf of users, or devices.

 

3.5.2 Authenticate (or verify) the identities of those users, processes, or devices, as a prerequisite to allowing access to organizational information systems.

 

This section includes controls such as using multifactor authentication, enforcing password complexity and storing/transmitting passwords in an encrypted format. SolarWinds does not have products to support these requirements.

 

3.6 Incident Response

3.6.1 Establish an operational incident-handling capability for organizational information systems that includes adequate preparation, detection, analysis, containment, recovery, and user response activities.

 

3.6.2 Track, document, and report incidents to appropriate officials and/or authorities both internal and external to the organization.

 

There is only one derived security requirement within the Incident Response section, namely:

3.6.3 Test the organizational incident response capability.

 

LEM can play a role in incident generation and the subsequent investigation. LEM can generate an incident based on a defined correlation trigger and respond to an incident via Active Responses. Reports can be produced based on detected incidents.

 

3.7 Maintenance  

3.7.1 Perform maintenance on organizational information systems.

 

3.7.2 Provide effective controls on the tools, techniques, mechanisms, and personnel used to conduct information system maintenance.

 

SolarWinds isn’t relevant to most of the requirements in this section. Controls contained within the Maintenance category include: ensuring equipment removed for off-site maintenance is sanitized of CUI, checking media for malicious code, and requiring multifactor authentication for nonlocal maintenance sessions.

 

LEM can assist with requirement 3.7.6, which states: “Supervise the maintenance activities of maintenance personnel without required access authorization.” Provided the appropriate logs are being generated and sent to LEM, reports can be used to audit the activity performed by maintenance personnel. NCM also comes into play, allowing you to compare configurations before and after maintenance windows.

 

3.8 Media Protection

3.8.1 Protect (i.e., physically control and securely store) information system media containing CUI, both paper and digital.

 

3.8.2 Limit access to CUI on information system media to authorized users.

 

3.8.3 Sanitize or destroy information system media containing CUI before disposal or release for reuse.

 

Most of the controls within the Media Protection section are not applicable to SolarWinds products. However, LEM can assist with one control.

 

3.8.7 – Control the use of removable media on information system components. 

LEM’s USB Defender feature can monitor for usage of USB removable media and can automatically detach USB devices when unauthorized usage is detected.

 

3.9 Personnel Security

3.9.1 Screen individuals prior to authorizing access to information systems containing CUI.

 

3.9.2 Ensure that CUI and information systems containing CUI are protected during and after personnel actions such as terminations and transfers.

 

There are no derived security requirements within this section. LEM can assist with 3.9.2 by auditing usage of credentials of terminated personnel, validating that accounts are disabled in a timely manner, and validating group/permission changes after a personnel transfer.

 

3.10 Physical Protection

3.10.1 Limit physical access to organizational information systems, equipment, and the respective operating environments to authorized individuals.

 

3.10.2 Protect and monitor the physical facility and support infrastructure for those information systems.

 

SolarWinds cannot assist with any of the physical security controls contained within this section.

 

3.11 Risk Assessment

3.11.1 Periodically assess the risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals, resulting from the operation of organizational information systems and the associated processing, storage, or transmission of CUI.

 

Vulnerable software poses a great risk to every organization. These vulnerabilities should be identified and remediated—that is exactly what the controls within this section aim to do.

 

Risk Assessment involves lots of policies and procedures; however, Patch Manager can be leveraged to keep systems up to date with the latest security patches.

 

3.11.2 – Scan for vulnerabilities in the information system and applications periodically and when new vulnerabilities affecting the system are identified.

Patch Manager cannot perform vulnerability scans, but it can be used to identify missing application patches on your Windows machines. NCM identifies risks to network security based on device configuration. NCM also accesses the NIST National Vulnerability Database to get updates on potential emerging vulnerabilities in Cisco® ASA and IOS® based devices.

 

3.11.3 – Remediate vulnerabilities in accordance with assessments of risk.

Patch Manager can remediate software vulnerabilities on your Windows machines via Microsoft® and third-party updates. Patch Manager can be used to install updates on a scheduled basis or on demand. On the network device side, NCM performs Cisco IOS® firmware upgrades to potentially mitigate identified vulnerabilities.

 

3.12 Security Assessment

3.12.1 – Periodically assess the security controls in organizational information systems to determine if the controls are effective in their application.

 

3.12.2 – Develop and implement plans of action designed to correct deficiencies and reduce or eliminate vulnerabilities in organizational information systems.

 

3.12.3 – Monitor information system security controls on an ongoing basis to ensure the continued effectiveness of the controls.

We can help with the monitoring aspects of the Security Assessment controls via modules such as Network Configuration Manager (NCM) and Log & Event Manager (LEM). When monitoring security controls and performing assessments, network configuration should not be overlooked. NCM enables you to standardize network device configuration, detect out-of-process changes, audit configurations, and correct violations. LEM can monitor event logs relating to information system security and perform correlation, alerting, reporting, and more. SolarWinds provides several other modules that support monitoring the health and performance of your information systems and networks.

 

3.13 System and Communications Protection

3.13.1 – Monitor, control, and protect organizational communications (i.e., information transmitted or received by organizational information systems) at the external boundaries and key internal boundaries of the information systems.

 

3.13.2 – Employ architectural designs, software development techniques, and systems engineering principles that promote effective information security within organizational information systems.

 

Many of the controls in this section involve protecting the confidentiality of CUI at rest, ensuring encryption is used and keys are appropriately managed, and ensuring networks are segmented. However, basic security requirement 3.13.1 is certainly an area where SolarWinds can assist. This requirement involves monitoring (and controlling/protecting) communications at external and internal boundaries. LEM can collect logs from your network devices and alert on any suspicious traffic. SolarWinds NetFlow Traffic Analyzer (NTA) can also be used to monitor traffic flows for specific protocols, applications, domain names, ports, and more.

 

3.13.6 Deny network communications traffic by default and allow network communications traffic by exception (i.e., deny all, permit by exception).

LEM can ingest logs from network devices, providing auditing to validate that traffic is being appropriately denied/permitted. NPM and NTA can also be used to monitor traffic. NCM can provide configuration reports to help ensure that your access control lists are compliant with “deny all, permit by exception,” as well as providing the ability to execute scripts to make ACL changes en masse.

 

3.13.14 – Control and monitor the use of VoIP technologies.

NPM/NTA and SolarWinds VoIP & Network Quality Manager can be used to monitor VoIP traffic/ports.

 

3.14 System and Information Integrity

3.14.1 – Identify, report, and correct information and information system flaws in a timely manner.

 

3.14.2 – Provide protection from malicious code at appropriate locations within organizational information systems.

 

3.14.3 – Monitor information system security alerts and advisories and take appropriate actions in response.

 

The controls within this section set out to ensure that neither the information system nor the information within it has been compromised. Patch Manager and LEM can play a role in system/information integrity.

 

3.14.4 Update malicious code protection mechanisms when new releases are available.

Essentially, this control requires you to patch your systems. Patch Manager provides the ability to patch your systems with Microsoft and third-party updates on a scheduled or ad hoc basis. Custom packages can also be created to update products that are not included in our catalog.

 

3.14.5 Perform periodic scans of the information system and real-time scans of files from external sources as files are downloaded, opened or executed.

This control ensures that you have an anti-virus tool in place to scan for malicious files. LEM can receive alerts from a wide range of anti-virus/malware solutions to correlate, alert, and respond to identified threats.

 

3.14.6 Monitor the information system including inbound and outbound communications traffic, to detect attacks and indicators of potential attacks.

This security control is very well suited to LEM—the correlation engine can monitor logs for any suspicious or malicious behavior. LEM can be used to monitor inbound/outbound traffic, while NPM/NTA can be used to detect unusual traffic patterns.

 

3.14.7 – Identify unauthorized use of the information system.

LEM can monitor for unauthorized activity. User-defined groups come into play here: they let you create blacklists/whitelists of authorized users and events.
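As a conceptual illustration of the whitelist idea (not LEM’s actual rule engine), the sketch below checks a stream of logon events against a hypothetical group of authorized users and flags everything else.

```python
# Conceptual sketch of a whitelist check: flag logon events from accounts
# that are not in a user-defined group of authorized users.
# The group contents and event fields are hypothetical examples.
AUTHORIZED_USERS = {"alice", "bob", "svc-backup"}  # user-defined group

events = [
    {"type": "logon", "user": "alice", "host": "EXCH01"},
    {"type": "logon", "user": "mallory", "host": "EXCH01"},
]

for event in events:
    if event["type"] == "logon" and event["user"] not in AUTHORIZED_USERS:
        # In LEM, a correlation rule would fire a notification here.
        print(f"ALERT: unauthorized logon by {event['user']} on {event['host']}")
```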

 

Still with me? As you can see, there is a substantial number of requirements within the 14 sets of controls, but when implemented correctly, the framework can go a long way toward ensuring the confidentiality, integrity, and availability of Controlled Unclassified Information and your information system as a whole. The SolarWinds products I’ve mentioned above all include a wide variety of out-of-the-box content, such as rules, alerts, and reports, that can help with the NIST 800-171 requirements.

 

I hope this blog post has helped you untangle some of the NIST 800-171 requirements and shown how you can leverage SolarWinds products to meet them. If you’ve got any questions or feedback, please feel free to comment below.

Version 2.0 is a new major release of GNS3, which brings significant architectural changes and new features to the growing community of over 900,000 registered network professionals. Since the project’s inception, GNS3 has made over 79 releases and has been downloaded over 13,693,000 times.

 

GNS3 started as a desktop-only application, from the first version up to version 0.8.3. With the more recent 1.x versions, GNS3 grew to allow the use of remote servers. With version 2.0, multiple clients can control GNS3 at the same time, and all the “application intelligence” has been moved to the GNS3 server.

 

What does this mean?

 

  • Third parties can build applications that control GNS3. This also allows individuals to easily configure and spin up a virtual network from pre-established templates.

  • Multiple users can be connected to the same project and see each other’s modifications in real time, allowing individuals in remote locations to work and collaborate on projects in the GNS3 virtual network environment.

  • No need to duplicate your settings on different computers if they connect to the same central server.

  • It is easier to contribute to GNS3, as the separation between the graphical user interface and the server/backend is much clearer.

  • GNS3 now supports the following vendor devices: Arista vEOS, Cumulus VX, Brocade Virtual ADX, Checkpoint GAiA, A10 vThunder, Alcatel 7750, NetScaler VPX, F5 BIG-IP LTM VE, MikroTik CHR, Juniper vMX and more....


All the complexity of connecting multiple emulators has been abstracted into what we call the controller (part of the GNS3 server). From a user’s point of view, this means it is possible to start a packet capture on any link, connect anything to a cloud, etc. Finally, by using the NAT object in GNS3, connections to the Internet work out of the box (note that this is only available with the GNS3 VM or on Linux with libvirt installed).

 

Get started with GNS3 v2.0 now!

 

NEW FEATURES DETAILS

 

Save as you go

Your projects are automatically saved as you make changes to them; there is no longer any need to press a save button. An additional benefit is that this avoids synchronization issues between the emulators’ virtual disks and projects.

 

Multiple users can be connected to the same project

Multiple users can be connected to the same project, see each other’s changes in real time, and collaborate. If you open a console to a router, you will see the commands sent by other users.

 

Smart packet capture

Starting a packet capture is now as easy as clicking on a link and asking for a new capture. GNS3 will pick the best interface to capture from. The packet capture dialog has also been redesigned to allow changing the name of the output file or preventing Wireshark from starting automatically.

NEW API

 

Developers can find out how to control GNS3 using the API here: http://api.gns3.net/en/2.0/. Thanks to the controller, it is no longer necessary to deal with multiple GNS3 servers, since most of the information is available through a single API. All the visual objects are exposed as SVG.
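As a quick taste of the API, the Python sketch below queries a GNS3 server for its version and lists existing projects. It assumes a local server on GNS3’s default port 3080; adjust the host and port for your setup.

```python
# Minimal sketch: query a GNS3 2.0 server over its REST API.
# Assumes a local GNS3 server on the default port 3080.
import requests

BASE = "http://127.0.0.1:3080/v2"

# Confirm the server is up and check its version.
version = requests.get(f"{BASE}/version").json()
print("GNS3 server version:", version["version"])

# List existing projects (name and ID of each).
for project in requests.get(f"{BASE}/projects").json():
    print(project["name"], project["project_id"])
```

Because the controller aggregates state on the server side, the same calls work whether your topology runs locally or on remote compute servers.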

 

Key Videos on GNS3 v2.0

 

This video explains and demonstrates how to upgrade both the GNS3 Windows GUI and the GNS3 VM to version 2.0.0 of GNS3. The demonstrations use Windows 10.

 

 

More helpful videos on GNS3 v2.0:

 

Our distributor in Germany has created French, German, and Spanish language packs. The language packs are not full versions. If you have already installed the English version, copy the language pack files into the program installation directory, replacing the existing files:

 

German v12.0.4 files:

French v12.0.4 files:

Spanish v12.0.3 files:
