
What Is VMware NSX?

Posted by alrasheed Oct 1, 2019

With its purchase of Nicira in 2012, VMware set out to further establish itself as a leader in software-defined networking and, more importantly, in cloud computing. VMware NSX provides the flexibility network administrators have long desired without relying solely on network hardware. Its primary roles include automation, multi-cloud networking, security, and micro-segmentation, and it dramatically reduces the time spent provisioning a network.

 

NSX, VMware’s software-defined networking (SDN) platform, lets you create virtual networks, including ports, switches, firewalls, and routers, without relying on a physical networking infrastructure. Everything is defined in software: nothing physical is added, yet the networks you create behave like their physical counterparts. Simply put, the NSX network hypervisor allows network administrators to create and manage virtualized networks. Virtual networking uses software to let devices such as computers, virtual machines, and servers communicate without dedicated network cabling.
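
To give a sense of what “the network as software” looks like in practice, here’s a minimal sketch that creates a logical segment through the NSX-T Policy REST API using Python. The manager address, credentials, segment name, and subnet are placeholders, and the endpoint path and payload fields reflect my reading of the Policy API, so treat this as illustrative and verify it against the API guide for your NSX version.

```python
# Minimal sketch: create (or update) an NSX-T segment via the Policy REST API.
# Manager address, credentials, segment name, and subnet are placeholders.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # placeholder manager address
AUTH = ("admin", "changeme")                      # placeholder credentials

segment_id = "demo-segment"
payload = {
    "display_name": "demo-segment",
    "subnets": [{"gateway_address": "10.10.10.1/24"}],
}

# PATCH is idempotent here: it creates the segment if it doesn't exist,
# or updates it if it does.
resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/segments/{segment_id}",
    json=payload,
    auth=AUTH,
    verify=False,  # lab use only; use proper certificates in production
)
resp.raise_for_status()
print("Segment created/updated:", segment_id)
```

A few API calls like this, wrapped in a script or a provisioning pipeline, are what make the automation and provisioning-time claims above concrete.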

 

That said, virtual networks are still provisioned, configured, and managed on top of the underlying physical network hardware. It’s important to keep this in mind, and it’s not my intention to say otherwise or mislead anyone.

 

Given the choice between a physical and a software-defined network infrastructure, I prefer SDN. Physical networking devices and the software built into them depreciate over time. These devices also take up space in your data center and consume electricity to stay powered on. And time is money: configuring these devices takes plenty of it.

 

Software-defined networks are also easier on the eyes: there are no network cables to worry about and no cable management solution to maintain. Does anyone truly find the image below appealing?

 

Image: poor cable management

 

Virtual networking provides the same capabilities as a physical network, but with greater flexibility and efficiency across multiple locations, without having to replace hardware or compromise reliability, availability, or security. Devices are connected in software using a vSwitch, and traffic between them can share a single network or be kept on separate networks. With software-defined networks, templates can be created and modified as needed, without reinventing the wheel the way you would with a physical networking device.

 

NSX-T is VMware’s next-generation SDN platform and includes the same features as its predecessor, NSX-V. The main difference is its ability to run on different hypervisors without relying on a vCenter Server.

 

VMware NSX editions include Standard, Professional, Advanced, Enterprise Plus, and Remote Office Branch Office (ROBO), each with its own use case. For example, Standard provides automated networking, while Professional adds micro-segmentation on top of it. For detailed information about each edition, please review the NSX datasheet here.

 

If you’re interested in learning more about NSX, VMware provides an array of options to choose from, including training, certifications, and hands-on labs.

Here’s the fifth part of the series I’ve been writing on HCI. Again, I don’t want to sway my audience toward one platform or another. Rather, I want them to follow the edict caveat emptor: for the future of the environment in question, it’s mission-critical the buyer be aware of the available options. Evaluate the functionality you require, compare it with what existing products and their roadmaps cover, and determine which platform best satisfies your needs.

 

In some cases, the “business as usual” approach may work best: the virtualization platform, whether VMware, XenServer, KVM, or OpenStack, and even some container-based platforms, must be maintained, expanded, or leveraged to deliver the scalability and approach the business requires. To be clear, most of what can be supported on HCI can be accomplished, within certain parameters, with a more piecemeal virtualization approach. Your hypervisor choice can also dictate the platform: choosing Acropolis as your virtualization platform requires a Nutanix environment, while HyperCore (a KVM-based hypervisor) points you toward Scale Computing. Again, the goal is to ensure the alternative hypervisor satisfies your needs. At one point, I heard a significant percentage of the customers who began with Acropolis subsequently migrated to VMware. I’m unsure whether there are stats on where it stands today, but it’s worth noting: this is no disparagement of Acropolis, just a reminder it isn’t necessarily right for everyone. The point, of course, is to know your virtualization requirements and make sure you’re covering those bases.

 

I’ve stressed scalability along both dimensions (storage horsepower versus CPU horsepower) and whether these environments can expand independently within each category. It may or may not be relevant to the customer’s needs to choose a platform capable of expanding incrementally in those categories. If it is relevant, it’s incumbent upon the customer to ensure they’ve chosen the correct platform, because the cost of getting it wrong can be profound.

 

I’ve also stressed how the storage environment within an HCI platform may or may not have the features important to your overall strategy. Are deduplication, compression, and replication part of your requirement set? If so, this will surely affect your choice, and it can matter a great deal to your overall backup/recovery/DR needs. Knowing your concerns up front determines whether these capabilities are relevant to your strategy.

 

Remember, the goal here is to build a platform that satisfies these requirements for the entire depreciation period, so you can support your workloads until the equipment reaches end of life.

 

With no bias toward or against any platform, the true value of whichever way you go, or even of choosing not to go this way at all, will be determined by scalability, management, storage, and ultimately how functional the environment remains moving forward.

In the first two parts of this series, we looked at both the history of virtualization and the new set of problems it’s introduced into our infrastructures.

 

In part three, we’ll investigate its evolution and how it will continue to be part of our technology deployments.

 

A Software-Defined Future

 

The future for virtualization, in my opinion, comes in looking beyond our traditional view.

Thinking about its basic concept could give us a clue to its future. At its base level, server virtualization takes an environment made up of applications and operating systems installed on a specific hardware platform and allows us to extract the environment from the hardware and turn it into something software-based only, something software-defined.

 

Software-defined is a growing movement in modern IT infrastructure deployments. Extracting all elements of our infrastructure from the underlying hardware is key when we want to deploy at speed, at scale, and with the flexibility to operate infrastructure in multiple and differing locations. For this to work, we need to software-define more than just our servers.

 

Software-Defining Our Infrastructure

 

Storage and networking are now widely software-defined, whether by specialist players or major vendors. These vendors have realized the value of taking capabilities previously tied to custom hardware and packaging them to be quickly and easily deployed on any compatible hardware.

 

Why is this useful? Much of what we want from our infrastructure today has been defined by how hyperscale cloud providers deliver theirs. None of us knows, or really cares, what sits under the covers of our cloud-based infrastructure; our interest is only in what it delivers. If our cloud provider swapped its entire hardware stack overnight, we wouldn’t know, and as long as our infrastructure continued to deliver the outcomes we designed it for, it wouldn’t matter.

 

Without software-defining our entire stack, there’s little chance we can deploy on-premises with the same speed and scale seen in cloud, making it difficult for us to operate the way businesses increasingly demand.

 

Is Software-Defined Really Virtualization?

 

This article may raise the question, “Is software-defined really virtualization?” In my opinion, it certainly is. As discussed earlier, virtualization is the separation of software from hardware dependency, providing the flexibility to install the software workload on any compatible hardware. This really is the definition of software-defined, be it storage, networking, or more traditional servers.

 

The Benefits of Software-Defined

 

If virtual, software-defined infrastructures are to continue to be relevant, they need to be able to meet modern and future demands.

 

The infrastructure challenges within the modern enterprise are complex, and we’ve needed to change the way we approach infrastructure deployment. We need to respond more quickly to new demands, and custom hardware restrictions will limit our ability to do so.

 

Virtualizing our entire infrastructure means we can deliver at speed and with consistency, in any location, on any compatible hardware, with the portability to move it as needed for performance and scale, without disruption. All this is at the core of a successful modern infrastructure.

 

In the next part of this series, we’ll look at how infrastructure deployment is developing to take advantage of software-defined and how methodologies such as infrastructure as code are essential to our ability to deliver consistent infrastructure, at scale and speed.

Hi there! It's Suzanne again. You might remember me from The Actuator April 10th, where I stepped in for Tom because he was busy "doing things." He's on his way to yet another conference and asked me to help out, as if I don't have enough things to do while he's away. Of course I agreed to do it, but not before I made him promise to build me a fire outside and serve me a cocktail.

 

So, here are some links I found interesting this week. Hope you enjoy!

 

People v mosquitos: what to do about our biggest killer

As I sit here in our yard, swatting away mosquitoes, I think it's time for us to eradicate them from the face of the Earth. And if this process involves flamethrowers, sign me up.

 

Seven Ways Telecommuting Has Changed Real Estate

As someone who managed a co-working space and now works from home as Director of Lead Generation for a real estate team, every one of these points rings true.

 

WeWork unsecured WiFi exposes documents

Speaking of co-working spaces, WeWork shows how not to do network security. I bet the printers in the office are storing every page scanned too! Oh, WeWork (sigh).

 

The true magic of writing happens in the third draft

For me, the true magic of writing happens during the third cocktail.

 

Google Says It's Achieved Quantum Supremacy, a World-First: Report

Tom keeps mumbling to me about quantum computing, so I'll include this for him. I'm not worried about Google achieving this, because it's likely they'll kill the product in less than 18 months.

 

What to Consider About Campus Safety, Wellness

As we start touring campuses with our children, these types of questions become important. Is it wrong to expect your 18-year-old (who's away at school) to check in with you daily? Asking for a friend.

 

7 Reasons Why Women Speakers Say No to Speaking & What Conference Organizers Can Do About It

Second of a 2-part series that talks about why women turn down speaking engagements. I remember the time Tom arranged for an all-women speaking event, 24 women speakers in total. It took longer to arrange, and the process was more involved, but I was proud he made the effort.

 

Out for a morning walk last week, we stumbled upon this beautiful view of a pond with steam rising off it. #Exhale

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article from my colleague Jim Hansen about the Navy’s new cybersecurity program. There’s no doubt our troops rely on technology and cyberthreats are increasing.

 

The Navy’s new Combat to Connect in 24 Hours (C2C24) is an ambitious program with the potential to change naval warfare as we know it.

 

The program is designed to improve operational efficiency by automating the Navy’s risk management framework (RMF) efforts; providing sailors with near real-time access to critical data; and accelerating the Navy’s ability to deploy new applications in 24 hours rather than the typical 18 months.

 

C2C24 is using open-source technologies and a unique cloud infrastructure to reduce the network attack surface and vulnerabilities. The Navy is standardizing its network infrastructure and data on open-source code and using a combination of shore-based commercial cloud and on-ship “micro cloud” for information access and sharing.

 

But malicious nation states are continually seeking ways to compromise defense systems—and they tend to be able to react and adjust quickly. As Navy Rear Adm. Danelle Barrett said, “Our adversaries don’t operate on our POM (program objective memorandum) cycle.”

 

With its ship-to-shore infrastructure, C2C24 could provide an enticing target. To complete its C2C24 mission, the Navy should pay special attention to the final two phases of the RMF: information system authorization and security controls monitoring.

 

Knowing Who, When, and Where

 

With C2C24, roughly 80 percent of mission-critical data will be stored on the ship. This will allow personnel to make operational decisions in real time without having to go back to the shore-based cloud to get the information they need at a moment’s notice.

 

But what if someone were to compromise the onshore cloud environment? Could they then also gain access to the ship’s micro cloud and, by extension, the ship itself?

 

It’s important for personnel to be notified immediately of a possible problem and be able to pinpoint the source of the issue so it can be quickly remediated. They need to see precisely what’s happening on the network, whether the activity is happening onshore, onboard the ship, or over the Consolidated Afloat Networks and Enterprise Services (CANES) system, which the Navy intends to use to deliver C2C24.

 

They also need to be able to control and detect who’s accessing the network. This can be achieved through controls like single sign-on and access rights management. Security and event management strategies can be used to track suspicious activity and trace it back to internet protocol addresses, devices, and more.
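
As a toy illustration of tracing suspicious activity back to an address, here’s a short Python sketch that tallies failed logins per source IP from a made-up log snippet. The log format and threshold are assumptions for demonstration only; a real SIEM ingests many more sources and applies far richer correlation, but the principle of tying events back to an IP is the same.

```python
# Toy illustration: count failed logins per source IP and flag noisy sources.
# The log format and threshold below are invented for demonstration purposes.
import re
from collections import Counter

sample_log = """
2019-10-01T12:00:01 sshd[211]: Failed password for admin from 203.0.113.7
2019-10-01T12:00:03 sshd[211]: Failed password for admin from 203.0.113.7
2019-10-01T12:00:09 sshd[214]: Failed password for root from 198.51.100.23
2019-10-01T12:00:11 sshd[215]: Accepted password for ops from 192.0.2.44
"""

failed = Counter(
    match.group(1)
    for match in re.finditer(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)", sample_log)
)

THRESHOLD = 2  # flag any source at or above this count
for ip, count in failed.items():
    if count >= THRESHOLD:
        print(f"Suspicious: {count} failed logins from {ip}")
```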

 

In short, it’s not just about getting tools and information quickly, but about thinking of the entire RMF lifecycle, from end to end. In the beginning, it’s about understanding the type of information being processed, where it’s stored, and how it’s transmitted. In the end, it’s about controlling access to information and monitoring it.

 

This is particularly important in a shipboard environment, where information means different things to different people. A person managing course corrections will need access to a particular data set, while someone managing weapons targeting may need different data altogether.

 

Controlling and monitoring the information flow is paramount to making sure data stays in the right hands. Further, ensuring the data is the expected data and not misinformation injected into the system by bad actors who have compromised the infrastructure is equally important.

 

Malicious attackers aren’t the only threat.

 

Security is not the only concern. One of the core goals of C2C24 is to make the Navy’s operations run more efficiently, getting information and applications to warfighters more quickly so they have what they need when they need it.

 

But different incidents can undermine this effort. A commercial cloud failure or lost satellite connectivity could play havoc with a ship’s ability to receive and send information to and from shore. These issues can compromise commanders’ abilities to make decisions that can affect current and future operations.

 

Thus, it’s just as important to keep tabs on network performance as it is to check for potentially malicious activity. Commanders must be alerted to network slowdowns or failures immediately. Meanwhile, personnel must have visibility into the source of these issues so they can be quickly rectified and the network can be restored to an operational state.

 

Fortunately, the fact that the Navy is basing C2C24 on a standardized, open-source infrastructure makes this easier. It’s simpler to monitor a single set of standardized network ports, for example, than it is to monitor non-standardized ports and access points. And an open-source infrastructure lays the groundwork for any number of monitoring solutions to provide better visibility and network security.

 

This standardization makes C2C24 a visionary program with the potential to redefine the Navy’s ability to adapt quickly to any situation and significantly improve its security posture. Warfighters will have the right information and applications much faster than before, and data security will be greatly improved—particularly if a government network monitoring solution is made an instrumental part of the effort.

 

Find the full article on SIGNAL.

 



The vCommunity

Posted by alrasheed Sep 24, 2019

We all want to be part of something special and share a common bond. That means doing what’s best for you, but also keeping others in mind and providing the support they need to be successful in life, professionally and personally. Teams are defined by the sacrifices we make for each other in hopes of succeeding in a way that benefits everyone, while building an experience that leads to positive, fruitful relationships for years to come.

 

We’ve all been there. The feeling of hopelessness. No matter what you do or say, and regardless of your contributions, you feel neglected or underappreciated. It creates a feeling of emptiness, as if you don’t belong or you’re on the outside looking in. Regardless of your profession, we each have a set of standards, morals, and core values to which we should adhere.

 

These neglected and underappreciated sentiments described how I felt in my career roughly two years ago. It was a constant feeling of being at the bottom of the proverbial totem pole, a “take one step forward and two steps back” mentality. Taking the high road felt like a dead end, or like being stuck in a roundabout where only left turns are permitted: like a NASCAR race, but considerably less fun.

 

When I discovered the vCommunity, everything changed. There was light at the end of the tunnel. There was hope I never realized existed, and it changed my outlook on my professional career and helped guide me personally. The vCommunity shares the values I preach: unity, joint ownership, teamwork, generosity, and most importantly, kindness. You get what you put into it, and I can assure you it’s been a blessing I wish I had discovered years ago.

 

The vCommunity includes individuals, groups, organizations, and everyday people located across the globe, in areas you’d never imagine. But location doesn’t matter, because we serve one purpose: to help one another as best we can. A simple five-minute conversation has the potential to turn into something special for both parties.

 

I’ve shared my experiences, and each has provided wonderful returns. Examples include tech advocacy programs like vExpert, Cisco Champion, Veeam Vanguard, and the Nutanix Technology Champions. I’ve had the pleasure of being introduced to wonderful people in the IT industry from across the globe, thanks to my good friends at Tech Field Day and Gestalt IT. I’m now considered an independent influencer with a passion for connecting people with technology through blogging.

 

None of this would be possible without the aforementioned groups, and there’s no chance I’d consider myself a blogger without their support.

 

There are additional ways to connect with fellow vCommunity members, and they don’t have to involve any association with a group or program. How so, you ask? By simply using your voice and connecting with podcasters to share your stories and experiences. You’d be surprised how much of an influence this can have on someone. I’ve had the pleasure of joining Datanauts, vGigacast, Virtual Speaking Podcast, Gestalt IT, and Technically Religious. Each has provided me with a platform to help others, and the feedback has been tremendous. My biggest takeaway is the influence it has had on others; I’m humbled to know I’ve had a positive effect on someone.

 

Additionally, I recommend the following podcasts because they provide quality content with valuable information and resources. They include Cisco Champion Radio, The CTO Advisor, DiscoPosse Podcast, Nerd Journey Podcast, Nutanix Community Podcast, Packet Pushers Community Show, Real Job Talk, Tech Village Podcast, The VCDX Podcast, ExploreVM Podcast, Veeam Community Podcast, VMUG Professional Podcast, Virtual Design Master, and the VMware Communities Podcast.

 

Let’s recap and discuss why I’ve taken the time to share this with you. I want you to grow, be empowered, and be successful, and it’s my goal to help you achieve those goals by providing any assistance I can. #GivingBack should be a requirement, because nobody achieves success without help from someone. For me, Jorge Torres and William Lam led me down the path to this point, and I’ll always owe them for believing in me.

 

I realize there are plenty of examples of giving back and I wish I could acknowledge every one of you, but you know who you are, and I thank you for it. The moral of the story is be happy, give back, and you’ll be rewarded for your contributions and dedication to the #vCommunity. Lead by example and others will follow.

 

“For fate has a way of charting its own course, but before one surrenders to the hands of destiny, one might consider the power of the human spirit and the force that lies in one’s own free will.” Lost: The Final Chapter

This is the fourth post of my series on hyperconverged infrastructure (HCI) architectural design and decision-making. For my money, the differences between these diverse systems are a function of the storage involved in the design. On the compute side, these environments use x86 and a hypervisor to create a cluster of hosts to support a virtual machine environment. Beyond some nuances in the hardware-based models, networking tends toward a similar approach in each. But often, the storage infrastructure is a differentiator.

 

Back in 2008, LeftHand Networks (later acquired by HPE) introduced the concept of a virtual storage appliance. In this model, the storage resides within the ESX servers in the cluster, is aggregated into a virtual iSCSI SAN, and provides redundancy across the nodes. Should an ESX host crash, with the standard behavior of VMs rebooting on a different host in the cluster, the storage remains consistent regardless. By today’s standards it’s not at all inelegant, but it lacks some of the functionality of, for example, vSAN. VMware vSAN follows a similar model, but can also incorporate deduplication, compression, and hybrid or all solid-state disk. To me, vSAN as used in vSAN-ready nodes, and as a component of the Dell EMC VxRail product, is a modernized version of what LeftHand brought to the table some 11 years ago. It’s a great model, and it eliminates the need for a company building a virtualized infrastructure to also purchase a more traditional SAN/NAS infrastructure to connect to it. Lower cost and simpler management make the approach more cost-effective.

 

Other companies in the space have leveraged the server-based storage model. The two that spring most rapidly to mind are Nutanix and SimpliVity, both of which have built solutions around packaged, single-SKU boxes following a similar model. Of course, the ways these environments are managed differ, but each supports the goal of managing a virtual landscape, with some points of differentiation (Nutanix supports its own hypervisor, Acropolis, which nobody else does). From a hardware perspective, the concept of packaged equipment sized for a particular environment is practically the same: x86 servers run the hypervisor, with storage internal to each node of the cluster.

 

I’ve talked previously about some of the scalability issues that may or may not affect end users, so I won’t go deeper into it here. Feel free to check out some of my previous posts about cluster scalability issues causing consternation about growth.

 

But storage issues are still key, regardless of the platform you choose; I believe storage is one of only two or three issues of primary concern. Compression, deduplication, and how efficiently SSD is incorporated are key to using storage well, but there’s more. A major use case for HCI is the hub-and-spoke approach, in which HCI sits on the periphery and a more centralized data center acts as the hub. One of the keys to backing up data in that model is replicating all changed data from the remote site to the hub, with storage awareness.

 

Many of the implementations I’ve been part of have used HCI in a ROBO (remote office/branch office), VDI, or key application role, and these require a forward-thinking approach to backing up their datasets. If you, as the decision-maker, value that piece as well, look at how the new infrastructure would handle the data and replicate it (hopefully with no performance impact to the system) so all data is easily recoverable.

 

When I enter these conversations, if the customer doesn’t concern themselves with backup or security from the ground up, mistakes are being made. I try to emphasize this is likely the key consideration from the beginning.

The first three blogs in this series were all about building a blueprint for a well-designed environment. In this article, we’ll review more practical considerations that influence the overall architecture and design of the ecosystem, which in turn requires specific features and methodologies dictated by the data flows in our use cases. Many of these concepts may not appear to be focused on security, but they come back to basic networking and documentation. It’s surprising how many organizations lack detailed knowledge of their underlying network design and the capabilities of its components, which is a must for the detection and mitigation of threats.

 

Logical design provides data segmentation, the first real step toward a secure and resilient design. Sub-interfaces, VLANs, VRFs, and virtual and tunnel interfaces separate traffic and allow various forwarding and security methods to be applied to individual flows. Devices such as firewalls and intrusion prevention appliances may be physically connected to routers or switches; however, logical design features such as firewall contexts and virtual sensors handle the segmented flows.

 

Another key concept is the use of addresses and identifiers as the basis for implementing security policy rules and requirements. Address assignment methods should be controlled and secured. This is simpler for static assignment to resources that don’t change frequently. Dynamic assignment is required for transient users and devices and should be part of authentication and authorization of the end entity. If a DHCP server is used, secure the service using techniques such as DHCP snooping and dynamic ARP inspection. It’s important to track assigned addresses by associating MAC addresses to IPs to prevent spoofing. Using authentication methods such as 802.1X or MAB and tying them to device profiling and security posture assessments introduces the concept of authorized network access based not only on identity but also capabilities.
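
To make the MAC-to-IP tracking idea concrete, here’s a small Python sketch that compares observed ARP entries against a known DHCP binding table and flags mismatches. The bindings and entries are invented for illustration; in a real deployment this data would come from DHCP snooping and the switches’ ARP/ND tables, and enforcement would happen in the network gear itself.

```python
# Sketch: flag ARP entries whose MAC doesn't match the recorded DHCP binding.
# The binding table and observed entries are invented for illustration only.

dhcp_bindings = {
    "10.1.20.15": "aa:bb:cc:00:11:22",
    "10.1.20.16": "aa:bb:cc:00:11:33",
}

observed_arp = [
    ("10.1.20.15", "aa:bb:cc:00:11:22"),   # matches the binding
    ("10.1.20.16", "de:ad:be:ef:00:01"),   # possible ARP spoofing
    ("10.1.20.99", "aa:bb:cc:00:11:44"),   # no binding on record
]

for ip, mac in observed_arp:
    expected = dhcp_bindings.get(ip)
    if expected is None:
        print(f"WARN: {ip} has no DHCP binding (static assignment or rogue device?)")
    elif expected != mac:
        print(f"ALERT: {ip} expected {expected} but saw {mac} (possible spoofing)")
```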

 

Allocated addresses may need to be translated to allow access to private services from a public network, or to hide the real address of a private resource. Be familiar with unidirectional versus bidirectional access when configuring NAT and PAT methods. If traffic flows are initiated using a translated address, a bidirectional method is required. This can also be combined with application port mapping, which forces connectivity via a non-standard port, hiding the real port. If address translation isn’t an option, but connectivity across a WAN or the internet to remote sites is required, consider tunneling methods such as IPv6-in-IPv4 or IPv4-in-IPv4, which may also be protected with IPsec. Role-based identifiers are also available to add context to a security policy beyond topology-dependent constructs such as IP addresses. Some vendors offer identity-based firewalling, where username-to-IP-address mapping is used to enforce policy.

 

Once end entities access the network, a solid understanding of routing protocols and packet forwarding techniques adds to overall security. Static routes can be used to redirect traffic for security reasons. Policy-based routing can also redirect or discard traffic as well as mark certain flows for priority handling. Be familiar with best practices for dynamic routing protocols, as well as any security mechanisms associated with them, such as MD5 authentication of routing updates and TTL-based hop limits for OSPF and BGP.

 

Understanding the services and functions important to network users and putting together a topology design helps define security policy elements. Enforcement techniques such as access lists, firewall rules, application security attack mitigations, and role-based access controls identify the security feature capabilities needed on network devices.

 

In keeping with best practices, several references are available to assist with secure design, including:

 

  • IETF standards-based BCPs (38), RFCs (1918, 3330, 2827, 3704)
  • Compliance best practices ISO Framework (27001, 27002), COBIT IT security standards
  • Well-known organization documents such as those by SANS and NIST
  • Vendor specific guidelines, recommendations, and limitations
  • Up-to-date vulnerability information from PSIRT, SNORT, and threat intel feeds

 

In the final blog of the series, we will review methods for monitoring and alerting that will be the barometer for measuring the success of our use case deployment.

I don't want to alarm you, but there are only 97 shopping days left until Christmas. Which explains why the local big-box hardware stores already have Christmas decorations out. I'm going to do my best to enjoy the wonderful autumn weather and not think about the snow, ice, and cold I know are heading my way.

 

As always, here are some links I hope you find interesting. Enjoy!

 

Mystery database left open turns out to be at heart of a huge Groupon ticket fraud ring

An interesting twist to the usual database-found-left-open-on-the-internet story.

 

LastPass fixes flaw that leaked your previously used credentials

If you are using LastPass, please update to the latest version.

 

3 Nonobvious Industries Blockchain is Likely to Affect

I'm not a fan of the Blockchain, but this article speaks about industries that make interesting use cases. Much more interesting than the typical food supply chain examples.

 

DNA Data Storage

It may be possible to fit all YouTube data in a teaspoon. That sounds great, but the article doesn't talk about how quickly you can retrieve this data.

 

Check the scope: Pen-testers nabbed, jailed in Iowa courthouse break-in attempt

Talk about having a bad day at work.

 

Amazon Quantum Ledger Database (QLDB) hits general availability

This is essentially a transaction log, one that grows forever in size, never gets backed up, and is never erased.

 

Now that MoviePass is dead, can we please start funding sensible businesses?

Probably not.

 

No, really, it's fine.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Mav Turner about the complexities of server and application management. Hyperconvergence makes the old question of “is it the network or the application?” harder to answer; good thing we have tools to help.

 

According to a recent SolarWinds federal IT survey, nearly half of IT professionals feel their environments aren’t operating at optimal levels.

 

The more complex the app stack, the more servers are required—and the more challenging it can be to discover problems as they arise. These problems can range from the mundane to the alarming.

 

It can be difficult to determine the origin of the problem. Is it an app or a server? Identifying the cause requires being able to visualize the relationship between the two. To do this, administrators need more in-depth insights and visual analysis than traditional network monitoring provides.

 

The Relationship Between Applications and Servers

 

Applications and servers today are closely entwined and can span multiple data centers, remote locations, and the cloud. Virtualized environments make it harder to discern whether an error is the fault of the application or the server.

 

Administrators must be able to correlate communications between applications and servers. Essentially, they need to understand what applications and servers are “saying” to each other and monitor the activity taking place between the two. This detailed understanding can help admins rapidly identify the cause of failures, so they can quickly respond.
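
For a small taste of what this correlation looks like at the host level, here’s a hedged Python sketch using the psutil library to map established connections back to the local processes that own them. It assumes psutil is installed and may need elevated privileges to see every connection; a full monitoring product goes much further, but the raw data starts here.

```python
# Sketch: list which local processes are talking to which remote endpoints.
# Assumes psutil (pip install psutil); some platforms require elevated
# privileges to see connections owned by other users' processes.
import psutil

for conn in psutil.net_connections(kind="inet"):
    # Only look at established connections that actually have a remote end.
    if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
        continue
    try:
        proc_name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
    except psutil.NoSuchProcess:
        continue  # the process exited between enumeration and lookup
    print(f"{proc_name:<20} {conn.laddr.ip}:{conn.laddr.port} -> "
          f"{conn.raddr.ip}:{conn.raddr.port}")
```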

 

Administrators should be able to monitor processes wherever they’re taking place—on-premises or in the cloud. As more agencies adopt hybrid IT infrastructures, keeping a close eye on in-house and hosted applications from a single dashboard will be imperative. Administrators need a complete view of their applications and servers, regardless of location, if they want to quickly identify and respond to issues.

 

A Deeper Level of Detail

 

Think of traditional monitoring as providing a broad overview of network operations and functionality. It’s like an X-ray taking a wide-angle view of an entire section of a person’s body, providing invaluable insights for detecting problems.

 

Application and server monitoring is more like a CT scan focusing on a particular spot and illuminating otherwise undetectable issues. Administrators can collect data regarding application and server performance and visualize specific application connections to quickly identify issues related to packet loss, latency, and more.

 

The Benefits of a Deeper Understanding

 

Greater visibility allows administrators to pinpoint the source of problems and respond faster. This saves time and headaches, freeing administrators to work on more mission-critical tasks and move their agencies forward.

 

Gaining a deeper and more detailed understanding of the interdependencies between applications and servers, as well as overall application performance, can also help address network optimization concerns. Less downtime means a better user experience, and fewer calls into IT: a win-win for everyone.

 

Growing Complexity Requires an Evolution in Monitoring

 

Federal IT complexity will continue to grow. App stacks will become taller, and more servers will be added.

 

Network monitoring practices must evolve to keep up with this complexity. A more complex network requires a deeper and more detailed government network monitoring approach with administrators looking closely into the status of their applications and servers. If they can gain this perspective, they’ll be able to successfully optimize even the most complex network architectures.

 

Find the full article on Government Computer News.

 


Hyperconverged infrastructure has become a widely adopted approach to data center architecture. With so many moving parts involved, it can be difficult to keep up with the speed at which everything evolves. Your organization may require you to be up to speed with the latest and greatest technologies, especially when it decides to adopt a brand-new hyperconverged infrastructure. The question then arises: how do you become a hyperconverged infrastructure expert? There are many paths, depending on the technology or vendor your organization chooses. While there aren’t many HCI-specific certifications yet, there are plenty of certification tracks that can help make you an HCI expert.

 

Storage Certifications

There are many great storage certification options, depending on which vendor your organization uses for storage. If you’re already proficient with storage, you’re a step ahead. The storage vendor isn’t nearly as important as the storage technologies and concepts. Storage networking in particular is important, and getting trained on its concepts will be helpful in your quest to become an HCI expert.

 

Networking Certifications

There aren’t many certifications more important than networking. I strongly believe everyone in IT should have at least an entry-level networking certification. Networking is the lifeblood of the data center, and storage traffic rides the network too, so a networking certification or training will help build your expert status.

 

Virtualization Certifications

Building virtual machines has become a daily occurrence, and if you’re in IT, it’s become necessary to understand virtualization technology. Regardless of the virtualization vendor of choice, having a solid foundational knowledge of virtualization will be key to becoming an HCI expert. Most HCI solutions use a specific vendor for the virtualization piece of the puzzle, but some HCI vendors have proprietary hypervisors built into their products. Find a virtualization vendor with a good certification and training roadmap to learn the ins and outs of virtualization. When it comes to HCI, you’ll need it.

 

HCI Training

If you already have a good understanding of all the technologies listed above, you might be better suited to taking a training class or going after an HCI-specific certification. Most HCI vendors offer training on their platforms to bring you and your team up to speed and help build a foundational knowledge base. Classes are offered through various authorized training centers worldwide. Some vendors offer HCI certifications; while there are only a few today, I believe this will change over time. Do a web search for HCI training and see what comes back. There are many options to choose from, depending on your level of HCI experience thus far.

 

Hands-on Experience

I saved the best for last, as you can’t get better training than on-the-job training. Learning as you go is the best route to becoming an HCI expert. Granted, certifications help validate your experience, and training helps you dive deeper, but hands-on experience is second-to-none. Making mistakes, learning from your mistakes, and documenting everything you do is the fastest way to becoming an expert in any field in my opinion. Unfortunately, not everyone can learn on the job, as most organizations cannot afford to have a production system go down, or have admins making changes on the fly without a prior change board approval. In this case, find an opportunity to build a sandbox or use an existing one to build things and tear them down, break things, and fix things. Doing this will help you become the HCI expert your organization desperately needs.

If you attended a major conference or read any industry press over the last handful of years, you’d be excused for thinking everyone would be running on virtual desktop infrastructure (VDI) by now. We’ve been hearing “It’s the year of VDI!” for years. Yet, for a reasonably mature technology, VDI has never had the expected widespread, mainstream impact. When the concept first started getting attention, as with many technologies, everyone latched onto the big value prop that VDI would provide great cost savings. However, it’s not quite that simple. Total cost of ownership (TCO) is often touted as the key metric with any technology, so today I’d like to explore some of the less apparent costs that go into VDI, which you should consider when evaluating whether it’s the year of VDI for your organization.

 

Software Costs

You’d think this would be the easy part of the equation. OK, so you’ve got your hypervisor. Chances are, you’re looking at the same VDI vendor you use for traditional VMs, but the licensing model for VDI differs from your traditional hypervisor’s. Where a traditional hypervisor is typically sold on a per-processor basis, VDI licensing is usually sold on a per-device or per-user basis. There’s even bifurcation within the per-user basis: you may see licenses sold on a named-user basis (Mary and Stan get licenses, but Joe’s been a bad boy, so no license for him) or a concurrent-user basis (I get “N” licenses, and that’s how many people can connect at a given time). Whichever path you follow should be directed by your use case. If you’re looking to solve for shift workers, follow-the-sun schedules, or similar, you may want to consider concurrent-user licensing, so multiple people can take advantage of a single license. If you’ve determined you have specialized workers with specific needs (think power users), then a named-user model might make sense. As it pertains to our cost discussion, a concurrent-user license can serve multiple people and hence carries a higher price per license, whereas named-user licenses may carry a smaller per-license spend but come at the cost of reduced flexibility.
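
To make the trade-off concrete, here’s a quick back-of-the-napkin sketch in Python comparing the two models. Every number in it (list prices, head count, concurrency ratio) is hypothetical, so plug in your own quotes and measured concurrency before drawing conclusions.

```python
# Back-of-the-napkin comparison of named-user vs. concurrent-user VDI licensing.
# All prices and counts are hypothetical placeholders; substitute your own quotes.

total_users = 300            # everyone who will ever need a desktop
peak_concurrency = 0.6       # fraction of users logged on at the busiest time

named_user_price = 200       # cost per named user (hypothetical)
concurrent_price = 350       # cost per concurrent connection (hypothetical)

named_cost = total_users * named_user_price
concurrent_cost = int(total_users * peak_concurrency) * concurrent_price

print(f"Named-user licensing:      ${named_cost:,}")
print(f"Concurrent-user licensing: ${concurrent_cost:,}")
print("Concurrent wins" if concurrent_cost < named_cost else "Named-user wins")
```

With these made-up numbers the named-user model comes out slightly ahead; a lower concurrency ratio flips the result, which is exactly why your use case should drive the licensing choice.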

 

That was the “easy” part of the software element of the equation, but there are several other software considerations to roll up into the TCO of your VDI proposal.

 

Monitoring

At a high level, when you think about all the various layers to a VDI solution, you need insight into the servers running the platform as well as their underlying infrastructure, the network carrying your VDI data, the hardware your users leverage to access their VDI desktop, and performance within the desktop OS itself. Does your existing monitoring platform have the capabilities to monitor all these elements? If yes, great! You just need to account for some portion of that in your cost calculations. If no, there’s a lot of homework in front of you, and at the end you’re going to need an additional purchase order to get the monitoring platform.

 

Application & Desktop Delivery

This big topic could be its own post, but how are you going to deliver desktops to your users and how are the applications going to be delivered within the desktop? Are you going to leverage the VDI vendor’s capabilities to deliver applications? Are the apps going to be virtualized? Some of these options come with higher-level licensing from your VDI platform provider, but if you go with a lower-tier VDI license, you might want a third-party delivery mechanism. Or you could do it manually, but we’ll come back to that in a minute.

 

Backups

At some level you’re just backing up a bunch of VMs, but does your current solution meet the unique needs of a VDI environment? If you’re deploying any persistent desktops at all, the backup design will look very different from the minimal needs a non-persistent design presents. Don’t forget delivery of your VDI solution will likely encompass a number of servers that should be considered for protection.

 

One last word on software costs: A C-level exec once said to me, “We don’t need to buy operating system licenses; we’re virtualized.” It doesn’t quite work this way. Take the time to understand the licensing agreements for your desktop OS. I promise you’ll want to proactively learn what they say before the vendor’s auditors come knocking.

 

Hardware Costs

The most obvious hardware cost is how you connect to your VDI environment. If you’re a BYOD shop, the job is done: just provide your users the agent they need. Typically though, you’re going to be evaluating zero clients against thin clients. Zero clients are essentially dumb terminals with little configurability and little flexibility, but they’re probably your cheapest option to purchase. Thin clients can cost the equivalent of a desktop PC, but you get a lot more horsepower, as they typically have better chipsets, memory, and graphics. Thin clients will usually support more protocols if you leverage multiple solutions as well. Know your users and understand their workloads to help you decide on client direction.

 

In my experience, storage plays a very important role in the success of a VDI project. Do you plan on leveraging persistent or non-persistent desktops? The answer drives whether you need additional storage capacity to support persistent desktops. Have you ever experienced a boot storm? If you have, then you know your storage components can create a bottleneck affecting your user experience. Take the time to evaluate your IOPS needs and whether all the components of your storage sub-system can support everyone in the organization logging on at 9 a.m. on a Tuesday following a long holiday weekend. Failure to do so could result in an unexpected and potentially expensive “opportunity” for a new storage project.
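
A rough sizing sketch in Python shows why boot storms bite. Every figure below is a hypothetical placeholder (per-desktop IOPS vary widely by image and workload), so measure your own golden image before sizing anything.

```python
# Rough boot-storm sizing sketch. Every figure is a hypothetical placeholder;
# measure your own images and workloads before sizing real storage.

desktops = 500
steady_iops_per_desktop = 10      # typical steady-state demand (assumed)
boot_iops_per_desktop = 50        # boot/login bursts run far hotter (assumed)

steady_demand = desktops * steady_iops_per_desktop
boot_demand = desktops * boot_iops_per_desktop

print(f"Steady-state demand: {steady_demand:,} IOPS")
print(f"Boot-storm demand:   {boot_demand:,} IOPS "
      f"({boot_demand / steady_demand:.0f}x steady state)")
print("Size the storage for the peak, or stagger logons to flatten it.")
```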

 

Opportunities and Opportunity Cost

What you give up and gain from a VDI solution is probably going to be the hardest part to quantify but should be one of the larger drivers of the initiative. Troubleshooting a Windows desktop, for example, is a relatively straightforward process. What if the Windows machine is a VDI desktop? Once you’ve converted to a virtual desktop infrastructure, you now have to troubleshoot the OS, the connecting hardware, VDI protocol, network, hypervisor, storage, and so on. Does your organization have the appetite for the time and resource commitments to retool your team to handle this new paradigm? Conversely, anyone who’s had to patch hundreds or thousands of desktops (and deal with the fallout) will probably appreciate the simplicity of patching a single golden image.

 

How security-conscious is your organization? If data loss prevention is a big concern for you, then all the other costs may fall by the wayside, as a VDI solution provides a lot of security measures to better protect your organization right out of the box. What about offering seamless upgrades to your users? How much value would you place on that user experience? I know we find it highly valuable both from an effort and a goodwill perspective.

 

A lot of hidden costs and considerations can trip up your VDI initiative. While it’s hard to cover everything, hopefully this piece helps illuminate some dark corners.

A security policy based on actual use cases has been documented, as have the components of the ecosystem. Before devising a practical implementation and configuration plan, one more assessment should be done: applying best practices and compliance mandates.

 

Best practices are informative rule sets that provide guidelines for acceptable use, resource optimization, and well-known security protections. These rules may be derived from practices commonly accepted in the information technology industry, vendor recommendations and advisories, legislation, and specific business mandates.

 

In the more formal sense, best practices are outlined in frameworks built from industry standards. These standards define system parameters and processes as well as the concepts and designs required to implement and maintain them. Standards-based best practices can be used as guidelines, but for many entities, their application is mandatory.

 

Standards-Based Frameworks

Well-known open standards applicable to IT governance, security controls, and compliance are:

 

ISO/IEC 27000 (Replicated in various country-specific equivalents)

The Code of Practice for Information Security Management addresses control objectives and focuses on acceptable standards for implementing the security of information systems in the areas of:

  • Asset management
  • Human resources security
  • Physical security
  • Compliance
  • Access control
  • IT acquisition
  • Incident management

 

The 27000 framework is outlined in two documents:

  • 27001 – the certification standard for measuring, monitoring, and security management control of Information Security Management Systems

  • 27002 – security controls, measures, and code of practice for implementations and the methodologies required to attain the certification defined in 27001

 

COBIT

Control Objectives for Information and Related Technology is a recognized framework for IT controls and security. It provides guidance to the IT audit community in the areas of risk mitigation and avoidance. It focuses more on system processes than on the security of those systems, through control objectives defined in four domains: Planning and Organization, Acquisition and Implementation, Delivery and Support, and Monitoring and Evaluation.

 

PCI-DSS

The Payment Card Industry Security Standards Council (PCI SSC) defines Payment Card Industry (PCI) security standards with a focus on improving payment account security throughout the transaction process. The PCI DSS is administered and managed by the PCI SSC, an independent body created by the major payment card brands (Visa, MasterCard, American Express, Discover, and JCB). These payment brands and acquirers are responsible for enforcing compliance and may fine an acquiring bank for compliance violations.

 

Compliance-Mandated Frameworks

While also based on best practices, these frameworks focus on industry-specific security controls and risk management. Compliance is mandatory and monitored through formal audits conducted by regulatory bodies to ensure certification is maintained in accordance with the industry and any legislation defined in a governing act. Failure to satisfy the criteria can leave an entity open to legal ramifications, such as fines and even jail time. Often, standards-driven best practice documents such as ISO 27002 form the foundation for applying the requirements defined in each act.

 

Three of the most common regulatory and legislative acts are:

 

GLBA (Gramm-Leach-Bliley Act)

GLBA primarily applies to the U.S. financial sector, covering organizations engaging in financial activity or classified as financial institutions, which must establish administrative, technical, and physical safeguard mechanisms to protect information. The act also mandates requirements for identifying and assessing security risks, planning and implementing security solutions to protect sensitive information, and establishing measures to monitor and manage security systems.

 

HIPAA (Health Insurance Portability and Accountability Act)

Applies to organizations in the health care sector in the U.S. Institutions must implement security standards to protect confidential data storage and transmission, including patient records and medical claims.

 

SOX (Sarbanes-Oxley Act)

Also known as the U.S. Public Company Accounting Reform and Investor Protection Act, it holds corporate executives of publicly listed companies accountable in the area of establishing, evaluating, and monitoring the effectiveness of internal controls over financial reporting. The act consists of 11 titles outlining the roles and processes required to satisfy the act, as well as reporting, accountability, and disclosure mandates. Although the titles don’t address security requirements specifically, title areas calling for audit, data confidentiality and integrity, and role-specific data access will require the implementation of a security framework such as ISO 27000 and/or COBIT.

 

Even if an organization doesn’t need to satisfy a formal mandate, understanding the content of well-defined security frameworks can ensure no critical data handling processes and policies are missed. If a formal framework is required, it will influence the tools and best practices methods used for policy implementation as well as monitoring and reporting requirements. These topics will be covered in the final two installments of this blog series.

 

The weather has turned cool as Autumn approaches, and everyone here is in back-to-school mode. In past years, September has been filled with events for me to attend. But this year there are none, giving me more time to enjoy sitting by the fire.

 

As always, here are some links I hope you find interesting. Enjoy!

 

More Than Half of U.S. Adults Trust Law Enforcement to Use Facial Recognition Responsibly

The results of this survey by Pew help show people have no idea what civil liberties mean. You can't say it's acceptable for law enforcement to use this tech to track criminals but not acceptable to track your activities. That's not how this works.

 

Michigan bans flavored e-cigarettes to curb youth vaping epidemic

Finally, someone stepping forward to take action. My town wouldn't allow an adult bookstore because of the problems it *could* cause, but we have three vaping shops within walking distance of the high school.

 

Facebook Dating has launched in the United States

What's the worst that can happen?

 

Hundreds of millions of Facebook users’ phone numbers found lying around on the internet

As I was just saying, it is clear just how much Facebook values your security and privacy.

 

Ranking Cities By Salaries and Cost of Living

I've never been to Brownsville, but apparently that's where everyone would want to earn a paycheck.

 

Japanese Clerk Allegedly Stole Over 1,300 Credit Cards By Instantly Memorizing All the Numbers

I'm not even mad, I'm impressed.

 

Artificial Intelligence Will Make Your Job Even Harder

Interesting take on the dangers of automating away those boring tasks in your daily life.

 

Was exploring a new bike path and found this view as a result. It's always fun to discover new things not far from home.

Even after all these years in technology, I remain in awe of IT pros. Watching my kids’ classes, it seems everyone—including elementary school students and other civilians—is practicing truly geeky, hands-on-keyboards arts. We’re also seeing more casual administrators—people with a keen interest in spending some time managing networks and applications on the way to another role. But IT pros are different.

 

Like teachers, firefighters, and healthcare workers, IT pros tend to go where help is most needed. They endure simultaneously cacophonous and freezing server rooms, suffer the indignities of cost center budget processes, juggle multiple business teams with competing priorities, and regularly work nights, weekends, and holidays, all while presenting calm to assure end users. IT pros don’t work jobs. They’re called to be helpful.

 

And today they’re doing something they haven’t done in a decade in response to external forces. They’re jumping (mostly) without a net to fail fast, while learning new (and in some cases immature) solutions like hybrid, cloud-native, data science, automation, and more. They’re also accepting the push toward service-based licensing, even with the added specter of a career-limiting OpEx bill only a click—or API call—away.

 

And none of this, especially the major changes business now demands in the pursuit of transformation, would be possible without IT professionals. These projects demand conviction, endurance, and creative thinking about how they’ll be maintained years from now. They drive new needs to engage business leaders and to ask tough or politically unpopular questions as legacy apps are modernized. And these projects are cornucopias of unknowns and new risks only considerable experience and skill can mitigate.

 

That’s why I remain in awe of IT pros. And on IT Pro Day 2019, it’s important we recognize the people in tech who make the world work. Here’s to the dedicated men and women who’ve charted their careers by solving problems and enabling the business, and who are always there for us whenever we need help.

 

Perhaps it’s IT pros who are the original five nines.

Recently, I was building out a demonstration and realized I didn't have the setup I needed. After a little digging, I figured out I wanted to show how to track changes to containers. This meant I needed some containers I could change, which meant installing Docker.
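For the curious, the end state I was after looks something like this minimal sketch, written against the Docker SDK for Python (my shorthand here, not necessarily how you'd build a production demo); the image, names, and paths are illustrative only.

    # A minimal sketch of the end state, using the Docker SDK for Python.
    # Assumes Docker is installed and running; image, container name, and
    # file path are illustrative only.
    import docker  # pip install docker

    client = docker.from_env()

    # Start a disposable container I can make changes to.
    container = client.containers.run("nginx:latest", name="change-me", detach=True)

    # Make a change worth tracking: drop a file into the running container.
    container.exec_run(["sh", "-c", "echo tweaked > /usr/share/nginx/html/demo.txt"])

    # Equivalent of `docker diff`: list filesystem changes since the container started.
    print(container.diff())

    # Clean up when the demo is over.
    container.stop()
    container.remove()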

 

If this sounds like the usual yak shaving we IT professionals go through in our daily lives, you’d be right. And even if I told you I’d never spun up my own containers—or installed Docker, for that matter—you’d probably still say, “Yup, sounds like most days ending in ‘y.’”

 

Because working in IT means figuring it out.

 

I’d like to tell you Docker installed flawlessly; I was able to scan the documentation and a couple of online tutorials and get my containers running in a snap; I easily made changes to those containers and showcased the intuitive nature of my Docker monitoring demo.

 

I’d like to say all of those things, but if I did, you—my fellow IT pros—would know I was lying. Because figuring it out is sometimes kind of a slog. Figuring it out is more often a journey from a series of “Well that didn’t work” moments to “Oh, so this is how it’s done?” Or, as I like to tell my non-techie friends and relatives, “Working in IT is having long stretches of soul-crushing frustration, punctuated by brief moments of irrational euphoria, after which we return to the next stretch of soul-crushing frustration.”

 

That’s not to say we who make our career in IT don’t get lucky from time to time. But, as Coleman Cox once said, “I am a great believer in Luck. The harder I work, the more of it I seem to have.”

 

As we work through each day, solving problems, shaving yaks, and generally figuring it out, we amass to ourselves a range of experiences which—while they may be a bit of a slog at the time—increase not only our knowledge of how this thing (the one we’re dealing with right now) works, but also of how things work in general.

 

While it’s less relevant now, back in the day I used to talk about the number of word processors I knew—everything from WordStar to WordPerfect to Word—close to a dozen if you counted DOS and Windows versions separately. At the time, this was a big deal, and people asked how I could keep them straight. The answer was less about memory and more about familiarity born of experience. I likened it to learning card games.

 

“When you learn your first card game,” I’d point out, “it’s completely new. You have nothing to compare it to. So, you learn the rules and you play it. The second game is the hardest because it completely contradicts what you thought you knew about ‘card games’ (since you only knew one). But then you learn a third, and a fourth, and you start to get a sense of how card games in general work. There’s nothing intrinsically special about an ace or a jack or whatever, and card games can work in a variety of ways.”

 

Then I’d pull it back around to word processors: “After learning the third program, you realize there’s nothing about spell check or print or word-wrap unique to MultiMate or Ami Pro. And once you have a range of experience, you’re able to see how WordPerfect’s ‘Reveal Codes’ was totally unique.”

 

Which makes a nice story. But there’s more to it than that. As my fellow Head Geek Patrick Hubbard pointed out recently, those of us who mastered WordPerfect discovered learning HTML was pure simplicity, specifically because of the “reveal codes” functionality I mentioned earlier.

Image: https://2.bp.blogspot.com/-3B6KHm5x3JQ/WrrSvt1pIAI/AAAAAAAABvw/wQLhAE28Aak8AkI13Ylg0M8iJmZofgV5ACLcBGAs/s400/2-revealcodes.png

 

Anyone who knows HTML should feel right at home with the view on the bottom half of the screen.

 

Having taken the time to slog through WordPerfect (which was, in fact, the second word processor I learned), I not only gained skills and experience in using the software, but I unknowingly set myself up to have an easier time later.

 

And this experience was by no means unique: many times, a piece of knowledge I'd struggled to acquire in one context turned out to be both relevant and incredibly useful in another. Nor is this kind of experience limited to IT professionals. We all have them. The experiences we have today all feed into the luck we have tomorrow.

 

So, on this IT Pro Day, I want to salute everyone in our industry who shows up, ready to do the hard work of figuring it out. May the yaks you must shave be small, and the times you find yourself saying “Wait, I already know this!” be many.

sqlrockstar

IT Pro Day Turns Five

Posted by sqlrockstar Employee Sep 10, 2019

Today is the 5th annual IT Pro Day, a day created by SolarWinds to recognize the IT pros who keep businesses up and running each and every day, all year long. Five years makes for a nice milestone, but in IT time it's not very long. Many of you IT pros reading this likely support systems two or three times as old.

 

As an IT pro myself, I know no one ever stops by your desk to say “thanks” for everything working as expected. That’s not the way the world works. I’ve never called to thank my cable company, for example. No, people only contact IT pros for one of two reasons: either something is broken, or something might become broken. And if it’s not something you know how to fix, you’ll still be expected to fix it, and fast.

 

And thanks to the ever-connected world in which we live, IT pros are responding to calls for help at all hours of the day. Not just work calls either. Family and friends reach out to ask for help with various hardware and software requests. Just a few weeks ago I had to show a friend who was struggling with some data how to create a PivotTable in Excel while sitting around a fire.

 

IT pros don’t do it for the money. We do it because it sparks joy to help others. Sure, the money helps bring home the bacon, but that’s not our end goal. We want what anyone wants: happy customers. And we make customers happy because we respond to alerts when called, we reduce risk by automating away repetitive tasks, and we fix things fast (and those fixes last, sometimes for years).

 

Today is the day to say THANK YOU to the IT pro, and even give a #datahug to the ones who had enough time to shower before heading to the office.

 

Cheers!

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Brandon Shopp with suggestions on delivering a great user experience to staff and constituents. I’d say security is more important than UX for many of our government customers, but there are ways of making improvements more securely.

 

While much has been said about the need to modernize federal IT networks, little has been written about the importance of optimizing the user experience for government employees. Yet modernization and the delivery of a good user experience go hand-in-hand.

 

Here are three strategies you can employ to ensure a seamless, satisfactory, and well-optimized experience while eliminating headaches and enhancing productivity.

 

Understand Your Users

 

  • What applications do they need to do their jobs?
  • What tools are they using to access those applications?
  • Are they using their own devices in addition to agency-issued technology?
  • Where are they located?
  • Do they often work remotely?

 

Let's discuss users who routinely use their personal smartphones to access internal applications. You'll have to consider whether to authorize their devices to work on your internal infrastructure or to introduce a separate employee personal device network segmented from the main network. The latter can protect the primary network without restricting employees' ability to use their devices and applications.

 

Similar considerations apply to federal employees who routinely work remotely. If this applies, you’ll want to employ segmentation to ensure they can access the applications they need without potentially disrupting or compromising your primary network.

 

Monitor Their Experience

 

Synthetic and real user monitoring can help you assess the user experience. Synthetic monitoring allows you to test the experience over time with simulated transactions and discover how specific events affected it. Real user monitoring lets you analyze real transactions as users interact with your agency's applications.

 

Both of these monitoring strategies are useful on their own, but they really shine when layered on top of one another. A synthetic test may show everything running normally, but if users are experiencing poor quality of service, something is clearly amiss. Perhaps you need to allocate more memory or optimize a database query. Comparing the real user monitoring data with the synthetic test can give you a complete picture and help you identify the problem.
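As a rough illustration of the synthetic side, here's a minimal scripted check in Python; the URL is a hypothetical placeholder, and a real monitoring platform would handle scheduling, alerting, and trending for you.

    # A minimal synthetic check; the URL is a hypothetical placeholder.
    import time

    import requests  # pip install requests

    URL = "https://apps.example.gov/portal/health"

    def synthetic_check(url, timeout=5.0):
        """Run one scripted request and record status plus latency."""
        started = time.time()
        try:
            response = requests.get(url, timeout=timeout)
            return {
                "timestamp": started,
                "status": response.status_code,
                "latency_s": round(response.elapsed.total_seconds(), 3),
                "ok": response.ok,
            }
        except requests.RequestException as exc:
            return {"timestamp": started, "status": None, "latency_s": None,
                    "ok": False, "error": str(exc)}

    if __name__ == "__main__":
        # Run this on a schedule (cron, a CI job, etc.) and keep the results;
        # the history becomes the baseline you compare real user data against.
        print(synthetic_check(URL))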

 

However, applications are only the tip of the spear. You also need to be able to see how those applications are interacting with your servers, so you can be proactive in addressing issues before they arise.

 

Obtaining this insight requires going a step beyond traditional systems monitoring. It calls for a level of server and application monitoring that allows you to visualize the relationship between the application and the server. Without understanding those interdependencies, you’ll be throwing darts in the dark whenever an issue arises.

 

Pre-Optimize the Experience

 

It’s also important to provide exceptional citizen experiences, particularly for agencies with applications that must endure periods of heavy user traffic.

 

These agencies can plan for periods of heavy usage by simulating loads and their impact on applications. Synthetic monitoring tools and network bandwidth analyzers can be instrumental in simulating and plotting out worst-case scenarios to see how the network will react.

 

If you’re in one of these agencies, and you know there’s the potential for heavy traffic, take a few weeks, or even months, in advance to run some tests. This will allow you to proactively address the challenges ahead—such as purchasing more memory or procuring additional bandwidth through a service provider—and “pre-optimize” the user experience.
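To make the idea concrete, here's a minimal sketch of a burst-style load simulation in Python; the endpoint and user counts are hypothetical, and purpose-built load-testing or synthetic monitoring tools will give you far more realistic traffic patterns.

    # A crude burst-load simulation; endpoint and counts are hypothetical.
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests  # pip install requests

    URL = "https://services.example.gov/apply"

    def one_request(_):
        started = time.time()
        try:
            status = requests.get(URL, timeout=10).status_code
        except requests.RequestException:
            status = None
        return status, time.time() - started

    def simulate_load(concurrent_users=50, requests_per_user=4):
        """Fire a burst of requests and summarize errors and p95 latency."""
        total = concurrent_users * requests_per_user
        with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
            results = list(pool.map(one_request, range(total)))
        latencies = sorted(elapsed for _, elapsed in results)
        errors = sum(1 for status, _ in results if status is None or status >= 500)
        p95 = latencies[int(0.95 * len(latencies))]
        print(f"requests: {total}  errors: {errors}  p95 latency: {p95:.2f}s")

    if __name__ == "__main__":
        simulate_load()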

 

All the investments agencies make towards network modernization are fruitless without a good user experience. If someone can’t access an application, or the network is horrendously slow, the investments won’t be worth much. Committing to providing great service can help users make the most of your agency’s applications and create a more efficient and effective user experience.

 

Find the full article on Federal Times.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Sascha Giese

IT Pros and Pirates

Posted by Sascha Giese Employee Sep 10, 2019

Ah yes, it’s IT Pro Day again – very soon!

 

Each year, on the third Tuesday of September, we celebrate the unsung heroes of IT. Well, someone did sing, but that’s a different story.

IT Pro Day is certainly more important than Talk Like a Pirate Day, celebrated just two days later.
Long John Silver would probably disagree, but back in his day, there was no IT on ships, only curved swords.

 

I can't think of another job where learning and growing are so essential, so rewarding, and, at the same time, so straightforward.
Depending on what niche you work in, you can develop your own learning path, study, sit an exam, and broaden your horizons while opening new opportunities at work as soon as you start sharing your knowledge.
You’re in full control, and that’s one of the best things ever!


Plus, the convenience of working with at least two screens means there’s always room for a browser game.

Just in case you need to work overtime while you're ensuring the background scripts execute correctly. They could never do this without your supervision!


Another perk of working in IT is that you can decorate your desk with pictures of Wookiees and elves and unicorns and sharks with lasers and no one would ever judge.

They wouldn’t understand anyway.

 

Still, some days drag, like in many other jobs, too.

For example, there might be that person in accounting who frequently causes trouble with, let's say, BitLocker.
As you fix it for the third time, you wish you could go back to the days of the BOFH in the '90s. But unfortunately, that's not possible.
Close your eyes for a second. Think of pirates.


On that note, do you know the worst thing that can happen to you as an IT pro? You show up at a random party, you're having a glass of whatever, and someone asks, "So, what do you do?"
Whatever you do, DO NOT respond with, “I work with computers.”

I’ve been there. It won’t end well.

Instead, I suggest saying, “I’m a pirate, arrr,” and focus on your drink.

 

Happy IT Pro Day everyone.

Running a traditional data center infrastructure for years can put your company in a rut, especially when it's time to pick a new solution. Electing to trade out your traditional infrastructure for a sleek new hyperconverged infrastructure (HCI) can be a difficult paradigm shift. So many questions can arise during selection, and plenty of HCI vendors are willing to answer them, which doesn't necessarily make it easier. When deciding to switch to an HCI solution, it's important to take stock of your current situation and assess why you're searching for a new solution in the first place. Here are some things to think about when choosing an HCI solution.

 

Do You Have Experienced Staff?

Having staff on hand to manage an HCI solution should be the main concern when choosing one. Traditional server infrastructures generally rely on several siloed teams to manage different technologies. If you have separate storage, networking, server, and security personnel, it's important to decide whether an all-in-one HCI solution is a realistic possibility. Is there enough time to get your team spun up on the latest HCI solution and all the nuances it brings? Take a good look at your staff and take stock of their skill sets and level of experience before diving headfirst into a brand-new HCI solution.

 

Support, Support, Support

Support only seems expensive until you need it. When your new HCI solution isn't working as planned or your team is having trouble configuring something, a support call can come in very handy. If the HCI solution you're looking into doesn't have the level of support to meet your needs, forget about it. It does no good to pay for support you can't rely on when it all comes crashing down, which it can from time to time. Ensure the vendor's support covers both hardware and software and offers coverage hours suited to your needs. If you're a government entity, does the vendor provide a U.S.-citizen-only support team? These are all important questions to ask of prospective vendors.

 

How Will the HCI Solution Be Used?

First things first, how will you be using the HCI solution? If your plan is to employ a new HCI solution to host your organization's new VDI implementation, specific questions need to be addressed. What are the configuration maximums for CPU and memory, and how much flash storage can be configured? VDI is a very resource-intensive workload, and going into the deployment without the right amount of resources in the new HCI solution can put your organization in a bad spot. If the idea of HCI procurement is coming specifically from an SMB/ROBO situation, it's extremely important to get the sizing right and to ensure the process of scaling out is simple and fast.
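As a back-of-the-envelope illustration of the sizing exercise, here's a quick sketch in Python; every number in it is a hypothetical placeholder rather than vendor guidance, and real sizing should come from the vendor's own tools and your actual desktop profiles.

    # Back-of-the-envelope VDI sizing; every number below is a hypothetical
    # placeholder, not vendor guidance.
    desktops = 500
    vcpus_per_desktop = 2
    ram_per_desktop_gb = 8
    vcpu_to_core_ratio = 4          # assumed oversubscription ratio
    cores_per_node = 48
    ram_per_node_gb = 768
    usable_flash_per_node_tb = 20

    cores_needed = desktops * vcpus_per_desktop / vcpu_to_core_ratio
    ram_needed_gb = desktops * ram_per_desktop_gb

    nodes_for_cpu = -(-cores_needed // cores_per_node)    # ceiling division
    nodes_for_ram = -(-ram_needed_gb // ram_per_node_gb)
    nodes = int(max(nodes_for_cpu, nodes_for_ram)) + 1    # +1 node for failure tolerance

    print(f"CPU needs {nodes_for_cpu:.0f} node(s), RAM needs {nodes_for_ram:.0f} node(s)")
    print(f"Suggested cluster (N+1): {nodes} nodes, "
          f"{nodes * usable_flash_per_node_tb} TB usable flash to carve up")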

 

Don't Get Pressured Into Choosing

Your decision needs to come when your organization is ready, not on a vendor's schedule or under pressure to commit. Purchasing a new HCI solution is not a small decision, and it can come with some sticker shock, so it's important to choose wisely and choose what's right for your organization. Take stock of the items I listed above and decide how to proceed with all the vendor calls, which will flood your phones once you decide you're looking for a new HCI solution.

In this third post on the subject, I’ll discuss hyperconverged architectural approaches. I want to share some of my experiences in how and why my customers have taken different choices, not to promote one solution over another, but to highlight why one approach may be better for your needs than another.

 

As I've alluded to in the past, there are two functional models of the hyperconverged approach. The first is the appliance model: a single prebuilt design, typically a six- or eight-rack-unit box composed of x86 servers, each with its own storage shared across the entire device, housing an entire virtual infrastructure. In the other model, separate devices are dedicated to each purpose (storage and compute), and one or the other is added as needed.

 

The differing approaches have key financial implications for how you scale your architecture. Neither is wrong, but either could be right for the needs of the environment.

 

Scalability should be the first concern, and manageability the second, when making these decisions. I'll discuss both issues, and how the choice of one approach over the other may affect how you build your infrastructure. As an independent reseller with no bias toward any vendor's approach, I make these the first questions I raise with my customers during the HCI conversation. I also get the opportunity to see how different approaches work during implementation and afterwards in day-to-day operations. Experience should be a very important element of the decision-making process. How do these work in the real world? After some time with the approach, is the customer satisfied they've made the correct decision? I hope to outline some of these experiences, give insight into the potential customer's perspective, highlight some "gotchas," and aid in the decision-making process.

 

A word about management: the ability to manage all your nodes through one interface, ideally vCenter or some other centralized platform, is table stakes. The ability to clone, replicate, and back up from one location to another is important. The capacity to deduplicate your data is not part and parcel of every architecture, but even more important is the ability to do so without degrading performance. Again, this is important for the decision-making process.

 

Scalability is handled very differently depending on your architecture. For example, if you run out of storage on your cluster, how do you expand it? In some cases, a full second cluster may be required. However, there are models in which you can add storage-focused nodes without having to increase processor capacity. The same holds true for the other side: if you have ample storage but your system is running slowly, some architectures require adding another cluster to expand, while in others capacity can be increased node by node with compute nodes. This may or may not be relevant to your environment, but if it is, it should be considered as part of the ROI of the scalability you may require. I believe it's a valuable consideration.

 

In my role as an architect for enterprise customers, I don't like conversations in which the quote for the equipment is the only concern. I much prefer to ask probing questions along the lines I've discussed above to help the customer come to terms with their more important needs, and to make a recommendation based on those needs. Of course, the cost of the environment matters to the customer. However, when doing an ROI valuation, one must account for the way the environment may be used over the course of its lifecycle.

 

In my next post, I’ll discuss a bit more about the storage considerations inherent to varying approaches. Data reduction, replication, and other architectural approaches must be considered. But how? Talk to you in a couple weeks, and of course, please feel free to give me your feedback.

Had a great time at VMworld last week. I enjoyed speaking with everyone who stopped by the booth. My next event is THWACKcamp™! I've got a few more segments to film and then the show can begin. I hope to "see" all of you there in October.

 

As always, here are some links I hope you find interesting. Enjoy!

 

VMware Embarks on Its Crown Jewel’s Biggest Rearchitecture in a Decade

Some of the news from VMworld last week. Along with their support for most public clouds (sorry Oracle!), VMware is pivoting in an effort to stay relevant for the next five to eight years.

 

Google says hackers have put ‘monitoring implants’ in iPhones for years

The next time the hipster at the Genius Bar tries to tell me Apple cares about security, I'm going to slap him.

 

Amazon's doorbell camera Ring is working with police – and controlling what they say

This really does have "private surveillance state" written all over it.

 

Volocopter’s air taxi performs a test flight at Helsinki Airport

At first I thought this said velociraptor air taxi and now I want one of those, too.

 

Fraudsters deepfake CEO's voice to trick manager into transferring $243,000

Interesting attack vector using deepfake tech. Use this story to raise awareness of similar scams, and consider updating company policies regarding money transfers.

 

About the Twitter CEO '@jack hack'

Good summary of what happened, and how to protect yourself from similar attacks not just on Twitter, but any platform that works in a similar manner.

 

Employees connect nuclear plant to the internet so they can mine cryptocurrency

What's the worst that could happen?

 

From VMworldFest last week, a nice reminder that your documentation should be kept as simple and concise as possible.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here's an interesting article by my colleague Mav Turner with tips on monitoring and troubleshooting distributed cloud networks in government agencies. I've come to expect a bit of skepticism from government customers about cloud adoption, but I'm seeing more evidence of adoption every day.

 

The Office of Management and Budget’s Cloud Smart proposal signals both the end of an era and the beginning of new opportunities. The focus has shifted from ramping up cloud technologies to maximizing cloud deployments to achieve the desired mission outcomes.

 

Certainly, agencies are investing heavily in these deployments. Bloomberg Government estimates federal cloud spending will reach $6.5 billion in fiscal year 2018, a 32% increase over last year. However, all the investment and potential could be for naught if agencies don’t take a few necessary steps toward monitoring and troubleshooting distributed cloud networks.

 

1. Match the monitoring to the cloud. Different agencies use a variety of cloud deployments: on-premises, off-premises, and hybrid. Monitoring strategies should match the type of infrastructure in place. A hybrid IT infrastructure, for example, will require monitoring that allows administrators to visualize applications and data housed both in the cloud and on-premises.

 

2. Gain visibility into the entire network. It can be difficult for administrators to accurately visualize what's happening within complex cloud-based networks, especially when data is being managed outside the organization.

 

Administrators must be able to visualize the entire network, so they can accurately pinpoint the root cause of problems. Are they occurring within the network or the system?

 

3. Reduce mean time to resolution. Data visualization and aggregation can be useful in minimizing downtime when a problem arises, especially if relevant data is correlated. This is much better than spending the time to go to three different teams to solicit the same information, which may or may not be readily available.

 

4. Monitor usage and automate resource lifecycle to control costs. Agencies should carefully monitor their cloud consumption to avoid unnecessary charges their providers may impose. They should also be aware of costs and monitor usage of services like application programming interface access. Often, this is free—up to a point. Being aware of the cost model will help admins guide deployment decisions. For example, if the cost of API access is a concern, administrators may also consider using agent-based monitoring, which can deliver pertinent information without having to resort to costly API calls.

 

The other key to keeping costs down in a government cloud environment is ensuring a tight resource lifecycle for cloud assets. Often, this will require automation and processes to prevent resources from existing beyond when they're needed. Just because admins think they're no longer using a service doesn't mean it doesn't still exist, running up charges and posing a security risk. Tight control of cloud assets and automated lifecycle policies will help keep costs down and minimize an agency's attack surface (see the sketch after this list for one way such a policy might be automated).

 

5. Ensure an optimal end-user experience. Proactively monitoring end-user experiences can provide real value and help ensure the network is performing as expected. Periodically testing and simulating the end-user experience allows administrators to look for trends signaling the cause of network problems (periods of excessive bandwidth usage, for example).

 

6. Scale monitoring appropriately. Although many government projects are limited in scope, agencies may still find they need to scale their cloud services up or down at given points based on user demand. Monitoring must be commensurate with the scalability of the cloud deployment to ensure administrators always have a clear picture of what’s going on within their networks.
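As promised above, here's a minimal sketch of an automated lifecycle check, written against AWS EC2 with boto3; the "expires-on" tag convention is hypothetical, and a real deployment would likely run this as a scheduled job and go through change control before terminating anything.

    # A minimal lifecycle check against AWS EC2; the 'expires-on' tag
    # convention is hypothetical, and this only reports, it doesn't terminate.
    from datetime import datetime, timezone

    import boto3  # pip install boto3

    ec2 = boto3.client("ec2")

    def expired_instances():
        """Yield IDs of running instances whose 'expires-on' tag is in the past."""
        today = datetime.now(timezone.utc).date()
        paginator = ec2.get_paginator("describe_instances")
        pages = paginator.paginate(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}])
        for page in pages:
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                    expires = tags.get("expires-on")  # e.g., "2019-09-30"
                    if expires and datetime.strptime(expires, "%Y-%m-%d").date() < today:
                        yield instance["InstanceId"]

    if __name__ == "__main__":
        for instance_id in expired_instances():
            print(f"{instance_id} is past its expiry tag -- review before terminating")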

 

Successfully realizing Cloud Smart’s vision of a more modern IT infrastructure based on distributed cloud networks will require more than just choosing the right cloud providers or type of cloud deployments. Agencies must complement their investments with solutions and strategies to make the most of those investments. Adopting a comprehensive monitoring approach encompassing the entire cloud infrastructure is the smart move.

 

Find the full article on Government Computer News.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Obtaining a VMware certification requires effort, dedication, and a willingness to learn independently with a commitment to the subject matter.

 

To fulfill VMware's certification requirements, you must attend a training class before taking the exam. The cost of the courses VMware provides is extremely high, but I found an alternative that fulfills the requirement at a very affordable rate. In my case, I used my personal funds to meet the training requirement, so cost was, without question, the deciding factor. Thanks to Andy Nash, I discovered Stanly Community College. The course is taught completely online, covers the installation, configuration, and management of VMware vSphere, and meets all of VMware's training requirements. If you're interested in training provided directly by VMware, review the vSphere: ICM v6.5 course to compare its content and cost with Stanly Community College.

 

I highly recommend reading the certification overview provided by VMware before moving forward with your decision. Each situation is unique, and the information provided will serve as an asset when determining which training option to pursue, as well as which certification you have in mind. For example, the prerequisites for the VCP-DCV6 exam can be found here. Additionally, the requirements vary depending on whether you're a first-time exam taker or don't currently hold a VCP certification. As of February 4, 2019, VMware has removed the mandatory two-year recertification requirement, giving you flexibility in when you upgrade and recertify.

 

There are multiple training options in addition to the choices I listed above, and, in some cases, they’re free of cost and are available to you anytime, anywhere. The formats include hands-on labs, vBrownBag, blogs, and various podcasts.

 

Hands-on labs are a fantastic resource because they permit you to test and evaluate VMware products in a virtual environment without needing to install anything locally. VMware products include Data Center Virtualization, Networking and Security, Storage and Availability, and Desktop and Application Virtualization, to name a few. Additionally, this gives you the opportunity to confirm which product, and which respective certification, you're interested in pursuing without making a financial commitment to the required training course.

 

vBrownBag is all about the community! Its purpose is to help one another succeed through contributions made by fellow community members. In the end, it’s all about #TEAM.

 

Blogs include the following contributors: Melissa Palmer, Daniel Paluszek, Gregg Robertson, Lino Telera, Chestin Hay, Cody Bunch, and many others.

 

Podcasts include the following contributors: Simon Long, Pete Flecha and John Nicholson, VMware Communities Roundtable, and many more.

 

Let's examine the pros and cons of pursuing a certification.

 

Pros:

  • Used to measure a candidate's willingness to work hard and meet goals
  • IT certifications are required for certain job openings (may assist you with obtaining a desired position as an applicant or current employee)
  • Certifications are used to confirm subject-matter expertise
  • Companies save on training costs if they hire a certified candidate
  • Certifications increase your chances of receiving a promotion or raise
  • Certifications ensure you're up to date on the latest best practices

Cons:

  • Cost (out of pocket vs. employer)
  • IT certifications are required for certain job openings (may prevent you from obtaining a desired position as an applicant or current employee)
  • Time (balancing certification training with a full-time job or family responsibilities)
  • Certifications may not be considered valuable if you don't have the experience to back them up
  • Test vs. business needs/situations may not be aligned

 

 

As you can see, multiple resources are available to help you succeed in pursuit of your certification, including the wonderful contributors in the #vCommunity.
