Monitoring Central


We’ve been listening to our SolarWinds® IP Address Manager (IPAM) customers who have ventured down the path of cloud automation, and we would like to share with you a new solution from SovLabs. It’s geared toward solving end-to-end IP address management for vRealize Automation (vRA).

 

The issue

vRA is widely used to provide self-service automation for infrastructure provisioning. One of the gaps in self-service is finding the next available IP address to assign to a new virtual machine. This requires a workflow that involves changing tools and manually looking for an address, which can be time-consuming and error-prone.

 

How does SovLabs help?

The SovLabs® vRA Integration Pack for SolarWinds consists of IPAM and DNS integration modules for VMware® vRealize® Automation, based on the new SolarWinds IPAM API. The modules bring a straightforward approach to integrating SolarWinds IPAM with vRA. Combining IPAM with the SovLabs vRA Integration Pack enables a fully automated method of obtaining and releasing IP addresses, as well as DNS record creation and removal, as the cloud environment dynamically scales. IP subnets can now easily be shared between vRA deployments alongside existing tools and devices with little fear of IP address conflicts.

 

SovLabs IPAM and DNS modules eliminate the pain of building and managing custom workflows by simplifying the integration between SolarWinds IPAM and vRealize Automation using a software-driven approach.  The modules share a built-in template engine that allows for dynamic data to be injected into endpoint definitions and configurations. Need to customize the comments field for IPAM records using vRA metadata?  Here’s an example of how to dynamically generate the comments using vRA properties via the SovLabs Template Engine configured on the SovLabs SolarWinds IPAM endpoint:

 

This comment template:

Reserved by {{ownerName}} on {{creationDate}} via vRA {{plugins.vCAC}} using blueprint {{blueprintName}} (NIC# {{SovLabsIPAMProfile.nic}})

 

Is rendered and inserted as a comment during VM provisioning/IP assignment:

Reserved by fred@sovlabs.com on 2016-10-07T14:23:38.360 via vRA 7.3.0 using blueprint Win2012R2 Prd (NIC# 0)
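As a rough illustration of what happens under the hood, here is a minimal Mustache-style substitution sketch in Python. It is not the SovLabs implementation (the real template engine supports far more), but it shows how the `{{name}}` placeholders above map to vRA metadata:

```python
import re

def render(template: str, context: dict) -> str:
    """Replace each {{name}} placeholder with its value from context.

    Dotted names like {{plugins.vCAC}} walk nested dictionaries.
    A toy stand-in for the SovLabs Template Engine, for illustration only.
    """
    def lookup(match):
        value = context
        for part in match.group(1).strip().split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\{\{(.*?)\}\}", lookup, template)

template = ("Reserved by {{ownerName}} on {{creationDate}} via vRA "
            "{{plugins.vCAC}} using blueprint {{blueprintName}} "
            "(NIC# {{SovLabsIPAMProfile.nic}})")

context = {
    "ownerName": "fred@sovlabs.com",
    "creationDate": "2016-10-07T14:23:38.360",
    "plugins": {"vCAC": "7.3.0"},
    "blueprintName": "Win2012R2 Prd",
    "SovLabsIPAMProfile": {"nic": "0"},
}

print(render(template, context))
# Reserved by fred@sovlabs.com on 2016-10-07T14:23:38.360 via vRA 7.3.0
# using blueprint Win2012R2 Prd (NIC# 0)
```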

 

Getting started

To get started, create a SolarWinds endpoint, then create or link to an IPAM profile and DNS configuration, and finally associate it with the blueprint – all directly in vRA.

 

For more information on the SovLabs vRA Integration Pack for SolarWinds and to request a free trial, visit the SovLabs website or email info@sovlabs.com.

 

The SolarWinds, SolarWinds & Design, Orion, and THWACK trademarks are the exclusive property of SolarWinds Worldwide, LLC or its affiliates, are registered with the U.S. Patent and Trademark Office, and may be registered or pending registration in other countries. All other SolarWinds trademarks, service marks, and logos may be common law marks or are registered or pending registration. All other trademarks mentioned herein are used for identification purposes only and are trademarks of (and may be registered trademarks of) their respective companies.

 

© 2017 SolarWinds Worldwide, LLC. All rights reserved.

SolarWinds® Port Scanner is a standalone free tool that can be used in various ways to identify the ports running on your network. It also helps unveil network vulnerabilities.

This versatile tool has many applications. Check out the ideas shared in this post, and let us know in the comments below how you use Port Scanner.

***

Idea 1: Use Port Scanner to run a security analysis

A security engineer would like to see how vulnerable his network is by performing an analysis of open and closed ports within his network. By running SolarWinds Port Scanner, he is able to scan IP addresses and their corresponding TCP and UDP ports. In doing so, he can verify if the ports that are supposed to be filtered are, in fact, filtered.

Once he establishes whether the corresponding ports are open, he can run a security analysis using Port Scanner and receive the status of all the TCP and UDP ports on his network. If he finds an open port that is not supposed to be open, he can go into his firewall or router to disable traffic on that port.  
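The core check behind any TCP port scan is a simple connection attempt. Here is a minimal Python sketch of that idea; the host address and the expected-open policy are illustrative, and a purpose-built tool like Port Scanner does far more (UDP, rate control, service detection):

```python
import socket

def check_tcp_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds (port open)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def unexpected_open_ports(host, ports, expected_open):
    """Scan the given TCP ports and return any open port not in the policy."""
    return [p for p in ports if check_tcp_port(host, p) and p not in expected_open]

# Example usage (host and policy are illustrative):
# print(unexpected_open_ports("192.0.2.10", range(1, 1025), {22, 443}))
```

Any port this returns is a candidate for disabling at the firewall or router, as described above.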

 

Idea 2: Run the CLI to export results

Network administrators must understand the peaks of IP usage within their network to see if they will still have IP addresses available during peak hours. To achieve that, the network administrator must run recurring scans to see the differences in IP usage.

Using Windows® Scheduler to run the command line interface (CLI) of SolarWinds Port Scanner every 15 minutes, a network admin can export the results to a CSV file. He can then run a PowerShell® script to compare the results from all of the CSV files. This gives him a clear understanding of IP usage within his network, which is critical to his job. Without this information, it is nearly impossible to maintain a secure network with optimal performance.
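The comparison step can be sketched in a few lines; this example uses Python rather than PowerShell to show the logic, and the `IP` column name is an assumption you would adjust to match the actual headers of your CSV exports:

```python
import csv
from pathlib import Path

def ips_in_use(csv_path: Path, column: str = "IP") -> set:
    """Read one exported scan and return the set of responding IP addresses.

    The column name "IP" is assumed; match it to your export's headers.
    """
    with csv_path.open(newline="") as f:
        return {row[column] for row in csv.DictReader(f) if row.get(column)}

def usage_over_time(export_dir: Path) -> list:
    """Return (filename, active-IP count) pairs, oldest export first,
    assuming the scheduled exports sort chronologically by name."""
    return [(p.name, len(ips_in_use(p)))
            for p in sorted(export_dir.glob("*.csv"))]

# Example usage (directory is illustrative):
# for name, count in usage_over_time(Path(r"C:\scans")):
#     print(f"{name}: {count} IPs in use")
```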

 

Idea 3: Use Port Scanner to detect rogue devices

A network administrator needs to know that only whitelisted devices are connecting to his Wi-Fi network. To achieve that, he needs to run recurring scans to see the differences in host names and MAC addresses.

To do this, the network admin can use the Windows Scheduler to run the CLI of SolarWinds Port Scanner every 15 minutes and export the results to an XML file. He can then run a PowerShell script to compare the results from all of the XML files, which will give him a clear understanding of the devices connecting to his network. If he finds any rogue devices, he can simply disable them from his wireless controller.   
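The whitelist check itself is straightforward. Below is a Python sketch of the comparison logic; the `<host hostname=... mac=...>` element layout and the whitelist entries are hypothetical, so adapt the element and attribute names to the actual XML export format:

```python
import xml.etree.ElementTree as ET

# Approved MAC addresses (example values, not real policy).
WHITELIST = {"00:11:22:33:44:55", "66:77:88:99:aa:bb"}

def rogue_devices(xml_text: str) -> list:
    """Return (hostname, MAC) pairs found in a scan export that are
    not on the whitelist. The XML layout here is an assumption."""
    root = ET.fromstring(xml_text)
    return [(h.get("hostname", "?"), h.get("mac", "").lower())
            for h in root.iter("host")
            if h.get("mac", "").lower() not in WHITELIST]
```

Any pair this returns is a candidate for disabling on the wireless controller.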

***

We hope you find Port Scanner to be a useful free tool, one of many new SolarWinds free tools to come. How will you discover your network with SolarWinds Port Scanner?

It seems DevOps is the new cool thing in IT. Sometimes it feels like DevOps is an amorphous thing that only cloud people can play with. For many of us who come from the client-server era, it can be intimidating.

 

We know DevOps can be defined in many ways. It can be thought of as a mindset, a methodology, or a set of tools. In this post, I offer a definition of DevOps by breaking the concept down into seven fundamental principles.

 

Implementing DevOps fully is complex: it requires new tools, new skills, and new processes, and it’s often only possible for development and operations teams working together on cloud-architected software. I am excited about these seven principles because they can be applied in any IT organization.

 

Embracing these seven principles might enable your team to grow more agile, more responsive to business needs, and better able to meet expectations. The combination of these principles represents the mindset that companies are trying to hire for, and the mindset that is required to make the best use of cloud technologies, too.

 

Here are the seven principles that define DevOps, which you can integrate into your IT operations team:

 

  1. Application and End-user Focus – Everyone on the team is focused on how their end-users and applications are impacted. The infrastructure is only there to make the application work.
  2. Collaboration – Because the focus is on the end-user, silos do not work. If the app is down, everyone has failed. There are no virtualization problems or isolated storage issues. There is only one team: the one responsible for making the app work. This requires transparency, visibility, a consistent set of tools, and teamwork that supports applications across the entire technology stack.
  3. Performance Orientation – Performance is a requirement and a core skill across the team. Performance is measured all the time, everywhere. Bottlenecks and contentions are well understood. Performance is an SLA, critical to the end-user experience. Everyone understands the direct relationship between performance, resource utilization, and cost.
  4. Speed – Taking agile one step further, shorter, iterative processes allow teams to move faster, innovate, and serve the business more effectively.
  5. Service Orientation – No monolithic apps. Everything is a service, from application components to infrastructure. Everything is flexible and ready to scale or change.
  6. Automation – To move faster, everything is automated: code, deployments, tests, monitoring, alerts. That includes IT services. Embrace self-service for your users and focus on what matters.
  7. Monitor Everything – Visibility is critical for speed and collaboration. Monitoring is a requirement and a discipline. Everything is tested, and the impact of every change is known.

 

For more details, I invite you to read the full presentation:

The 7 Principles of DevOps and Cloud Applications

[slideshare id=56581871&doc=the7disciplinesofdevopsandcloudaplications2-151231191647]

The term “cloud” has stopped being useful as a description of a specific technology and business model. Everything, it seems, has some element of cloudiness. The definition of cloud versus on-premises has blurred.

 

It’s only been eight years since Gartner® defined the five attributes of cloud computing: scalable and elastic, service-based, shared, metered by use, and using internet technologies. Shortly after that, Forrester® defined the cloud as standard IT delivered via internet technologies on a pay-per-use, self-service model.

 

What we call on-premises is most often virtualized, dynamically provisioned infrastructure in a co-location hosting facility, programmable by software. Clouds, meanwhile, now offer long-term, bare-metal, prepaid-for-the-year infrastructure, as well as private/dedicated infrastructure.

 

Organizations have learned that there is a place for cloud-hosted resources and a place for on-premises resources. The best analogy I have is this: there is a time when you want to buy a car, and there is a time when you want to rent a car (or get a taxi). As Lew Moorman, president of Rackspace®, told me many years ago, “the cloud is for everyone, but not for everything.”

 

It’s undeniable that every IT department is adopting the cloud, but it is also becoming increasingly clear that on-premises IT is not going away. Most companies will end up with a combination of the two. But how?

 

For the near future, there are three broad ways for IT departments to consume the cloud:

  • SaaS – From Salesforce® to NetSuite® and Office 365®
  • “Lift and Shift” – Where the architecture stays the same, and you only migrate the workloads to be hosted on a cloud
  • “Cloud first,” which takes full advantage of cloud architecture and services. This model is only viable for net new projects and for those where it makes sense to invest in writing or re-writing apps from the ground up.

 

The reason I bring this up is that, when it comes to monitoring, application architecture is more important than where things are hosted.

 

A standard three-tier architecture application like SharePoint® on AWS® needs to be monitored essentially the same way it is monitored on-premises, or in a co-location environment. Conversely, a cloud-architected application (service-oriented, dynamically provisioned, horizontally scaled, etc.) will require a different monitoring approach, whether it is hosted on a public or a private cloud.

 

The key point is that cloud is quickly becoming irrelevant as a term. No one says they have an electronic calculator or a digital computer anymore.

 

We need to start using more specific terms that are more meaningful and useful, such as cloud services, cloud architecture, or cloud hosting – not just cloud.

If your organization is based in the EU, or provides goods or services to the EU, you’ve probably heard a lot about General Data Protection Regulation (GDPR) compliance lately. In this post, I’d like to educate the THWACK® community on some of the GDPR requirements and how SolarWinds products such as Log & Event Manager (LEM) can assist with GDPR compliance.

 

Why the need for GDPR?

In December 2015, the EU announced that the GDPR would be implemented in place of the Data Protection Directive (DPD), the EU’s data protection law at the time. The DPD was established over 20 years ago and has not kept up with the seismic changes in information technology; its shortcomings have become apparent, it is no longer sufficient for today’s technologies and threats, and the EU saw the need to replace it.

 

The shift from directive to regulation

A defining change that comes with the launch of GDPR is the shift from a directive to a regulation. The DPD was a directive, meaning a set of rules issued to member states, each of which could interpret and implement the rules differently. GDPR is a regulation, which requires countries to implement it without any scope for varying interpretations, removing ambiguities about organizations’ data protection responsibilities. GDPR paves the way for data privacy as a fundamental right for EU citizens. The implementation deadline for the regulation is May 25, 2018, so organizations are certainly against the clock to implement the necessary policies, procedures, and systems to ensure they are compliant.

 

 

What exactly is personal data?

GDPR defines a very broad spectrum of personal data. Personal data is no longer limited to information such as name, email, address, and phone number; GDPR also classifies online identifiers, such as IP addresses, web cookies, and unique device identifiers, as personal data. Even pseudonymous data is included: personal data that has been technically modified in some way, such as hashed or encrypted. It’s worth noting that the rules are slightly relaxed for data that is pseudonymized, which provides an incentive for organizations to encrypt or hash their data. GDPR also singles out special categories of personal data: “data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade-union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health, or data concerning a natural person’s sex life or sexual orientation.” (GDPR Article 9, page 124)

 

 

My organization is not based in the EU—why should I care about GDPR?

Although it is an EU regulation, it is not limited to the EU. GDPR will affect organizations on a global scale: the regulation applies to any organization that offers goods or services to EU citizens. If a company based outside the EU is storing, managing, or processing personal data belonging to EU citizens, it will need to ensure GDPR compliance (GDPR Article 3, page 110). According to a recent PwC study, a staggering 92% of US multinational companies have listed GDPR compliance as a data privacy priority, and a significant percentage of those organizations plan to spend $1 million or more on GDPR.

 

Data controllers vs. data processors

Controller – “The natural or legal person, public authority, agency, or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data.”

 

Processor – “A natural or legal person, public authority, agency, or other body which processes personal data on behalf of the controller.”  (Article 4, GDPR page 112)

 

Under the DPD, data processors had few direct compliance responsibilities, whereas GDPR places joint responsibility on both data controllers and data processors to comply with the regulation. As an example, if an organization (controller) outsources its payroll to an external payroll company (processor), even though the payroll company is managing and storing data on behalf of the controller, it is now required to comply with GDPR. This will impact controllers and processors alike: controllers will have to conduct reviews to ensure their processors have a framework in place to comply with GDPR, and processors will have to ensure they are compliant.

 

Data breach notification – GDPR Article 33 (page 53)

The Data Protection Directive didn’t require organizations to notify authorities of data breaches. GDPR defines a personal data breach as the “accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to personal data transmitted, stored, or otherwise processed.” It’s worth remembering that personal data now includes IP addresses, web cookies, unique device identifiers, and more. GDPR also requires organizations (or controllers, as they are known in GDPR) to report data breaches within 72 hours. If this deadline is not met, you will have to explain the reasons for the delay. If you are a data processor, you must report the breach to the controller; the controller then notifies the “supervisory authority.” Data subjects must also be informed when a breach poses a high risk to their rights and freedoms. However, if the controller has implemented protection measures such as encryption on the data, then the data subject’s rights and freedoms are unlikely to be at risk.

 

Individual rights

GDPR provides EU citizens with increased personal data rights. Just some of these individual rights include Consent (Article 7), Right to Erasure (Article 17), and Data Portability (Article 20).

 

Organizations will require consent when collecting personal data of EU citizens. The type of data and retention period will need to be stated in plain language that citizens can clearly understand. Data controllers will be required to prove that consent has been provided by the subject.

 

Individuals also have the right to erasure, meaning subjects can request that controllers delete all information about them, provided the controller has no reason to further process the data. There are exceptions if the data is needed for legal obligations; for example, financial institutions are legally obliged to retain data for a certain period of time. If a data controller has shared personal data with third parties, the onus is on the controller to inform those third parties of the data subject’s request to erase the data.

 

Data Portability allows data subjects to receive the personal data they provided to a data controller in a structured, “machine-readable” format. This portability facilitates data subjects’ ability to move, copy, or transmit data easily from one service provider to another.

 

What happens if we don’t comply?

If your organization is not compliant with GDPR, it can receive fines of up to €20 million or 4% of global annual turnover for the preceding financial year (whichever is greater). These penalties apply to both data controllers and processors. (Article 83, section 5)
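To make the “whichever is greater” rule concrete, here is a simple sketch of the Article 83(5) upper bound (an illustration of the arithmetic, not legal advice):

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper bound of an Article 83(5) fine: the greater of EUR 20 million
    or 4% of global annual turnover for the preceding financial year."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A company with EUR 2B turnover: 4% (EUR 80M) exceeds EUR 20M.
print(max_gdpr_fine(2_000_000_000))  # 80000000.0
```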

 

 

How can SolarWinds help?

GDPR will likely require organizations to implement new policies, procedures, controls, and technologies—it may even require you to hire a Data Protection Officer, in certain cases. While no single technology can meet all the requirements of GDPR, SolarWinds can certainly assist with some of the requirements.

 

Article 32: Security of processing

This section of GDPR requires organizations to “implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk.” SolarWinds® Patch Manager can be used to identify and update missing patches and outdated third-party software on your Windows® servers and workstations. Patch Manager also enables you to inventory your Windows® machines and report on unauthorized software installations on your network.

 

Article 32 also requires regularly testing the effectiveness of the technical measures in place for ensuring the security of processing. SolarWinds LEM can be used to validate the controls you have put in place.

 

Please see here for more information: Article 32

 

Article 33 and 34: Notification of a personal data breach to the supervisory authority and communication of a personal data breach to the data subject

SolarWinds Risk Intelligence (RI) is a product that performs a scan to discover personally identifiable information across your systems and points out potential vulnerabilities that could lead to a data breach. RI can audit PII data to help ensure it is being stored in accordance with the requirements of GDPR. The reports from RI can be helpful in providing evidence of due diligence when it comes to the storage and security of PII data.

 

As mentioned previously, if a personal data breach occurs, the controller must notify the supervisory authority within 72 hours. It is vital that breaches and threats are identified as quickly as possible.

 

LEM can assist with the detection of potential breaches thanks to features such as correlation rules and Threat Feed Intelligence. LEM’s File Integrity Monitoring and USB Defender® can monitor for any suspicious file activity and detect the use of unauthorized USB removable media. If an incident occurs, LEM’s nDepth feature can be leveraged to perform historical analysis. LEM also includes best practice reporting templates to assist with compliance reporting.

 

Please see here for more information: Article 33 and Article 34

 

 

“The implementation deadline for the regulation is May 25, 2018.”

 

The GDPR deadline is fast approaching. GDPR compliance will require significant effort from both data controllers and processors. There are several steps required to get started with GDPR, which include (but are not limited to) performing an analysis of what personal data your organization stores and where it’s stored, reviewing existing IT security policies and procedures, and ensuring you have the necessary technological and organizational procedures in place to detect, report, and investigate personal data breaches.

 

I am very interested in hearing opinions and how members of the THWACK community are preparing for GDPR. Please feel free to provide comments below.

 

To learn about the SolarWinds portfolio of IT security software, please see here.

The SolarWinds THWACK® community has grown to become one of the largest and most active communities for IT professionals, and it expects about two million unique visitors this year alone.

 

We see it as a great opportunity to have a conversation and to connect.

 

IT is changing all the time. That’s what makes it such an interesting industry. SolarWinds® solutions have been changing, too. In addition to our traditional product line, powered by the Orion® Platform, SolarWinds now offers a remote monitoring product line for MSPs, and a portfolio of cloud monitoring products for DevOps teams building cloud-first applications.

 

This makes it more important than ever that we have a space to connect with customers and with the IT industry. This is that space.

 

Monitoring Central complements our two other blog communities on THWACK: Geek Speak, where you can read opinions from industry thought leaders, and the Product Blog, where you find out about product updates and new releases.

 

Monitoring Central is a new space to talk about all things monitoring.

 

We invite you to ask questions, voice your opinions, and actively participate in this blog. For example, write a comment below suggesting any topics you would like to hear about.

 

We look forward to the conversation.
