Geek Speak

This blog series has been all about taking a big step back and reviewing your ecosystem. What do you need to achieve? What are the organization’s goals and mandates? What assets are in play? Are best practices and industry recommendations in place? Are you making the best use of existing infrastructure? The more questions asked and answered, the more likely you are to build something securable without ignoring business needs or compromising usability. Along the way, you also created a baseline defining a normal working environment.

 

There’s no such thing as a 100% protected network. Threats evolve daily. If you can’t block every attack, the next best thing is detecting when something abnormal is occurring. Anomaly detection requires methodologies that go beyond simply collecting the event logs generated by the elements on the network. Collecting information about network events has long been essential to providing a record of activities related to accounting, billing, compliance, SLAs, forensics, and other requirements. Vendors have provided data in standardized forms such as Syslog, as well as in device-specific formats. These outputs are then analyzed to provide a starting point for business-related planning, security breach identification and remediation, and many other outcomes.

 

In this blog, I’ll review different analysis methods you can use to detect threats and performance issues based on the collection of event log data from any or all systems in the ecosystem.

 

Passive/deterministic traffic analysis: Built on rule- and signature-based detection, passive traffic analysis continually monitors traffic for protocol anomalies, known threats, and known behaviors. Examples include detecting tunneled protocols such as IRC commands carried inside ICMP traffic, spotting non-standard ports and protocol field values, and inspecting application-layer traffic for unique application attributes and behaviors that reveal operating system platforms and their potential vulnerabilities.
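To make the rule- and signature-based idea concrete, below is a minimal Python sketch that assumes packets have already been decoded into simple dictionaries. The signatures, field names, and payload strings are illustrative only; a real IDS inspects live traffic or captures and ships with far larger rule sets.

# Minimal sketch of rule/signature matching over pre-parsed packet metadata.
# Field names ("protocol", "payload", "dst_port") are assumptions for the example.

SIGNATURES = [
    {
        "name": "IRC command tunneled in ICMP",
        "match": lambda p: p["protocol"] == "ICMP"
        and any(cmd in p.get("payload", b"") for cmd in (b"PRIVMSG", b"NICK", b"JOIN")),
    },
    {
        "name": "HTTP request on a non-standard port",
        "match": lambda p: p["protocol"] == "TCP"
        and p.get("payload", b"").startswith(b"GET ")
        and p.get("dst_port") not in (80, 8080),
    },
]

def inspect(packets):
    """Yield (signature name, packet) for every rule that fires."""
    for pkt in packets:
        for sig in SIGNATURES:
            if sig["match"](pkt):
                yield sig["name"], pkt

# A ping payload carrying an IRC command trips the first rule.
sample = [{"protocol": "ICMP", "payload": b"PRIVMSG #ops :exfil ready"}]
for name, pkt in inspect(sample):
    print(f"ALERT: {name} -> {pkt}")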

 

Correlating threat information from intrusion prevention systems and firewalls with actual user identities from identity management systems allows security professionals to identify breaches of policy and fraudulent activity more accurately within the internal network.
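As a rough sketch of that correlation step, the Python below joins invented IPS/firewall alerts with an invented identity-management lookup, so an analyst sees who, not just which IP address, is behind an event. In practice both data sets come from the respective systems and the field names will differ.

# Hypothetical alert and identity records; real data comes from the IPS/firewall
# and the identity management system.
alerts = [
    {"src_ip": "10.1.4.22", "signature": "SMB lateral movement attempt"},
    {"src_ip": "10.1.9.87", "signature": "Outbound connection to known C2 host"},
]

ip_to_user = {
    "10.1.4.22": {"user": "b.jones", "department": "Finance"},
    "10.1.9.87": {"user": "svc-printer", "department": "Facilities"},
}

def correlate(alerts, ip_to_user):
    """Attach a user identity to each alert."""
    for alert in alerts:
        identity = ip_to_user.get(alert["src_ip"], {"user": "unknown", "department": "unknown"})
        yield {**alert, **identity}

for enriched in correlate(alerts, ip_to_user):
    print(f"{enriched['signature']} from {enriched['src_ip']} ({enriched['user']}, {enriched['department']})")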

 

Traffic flow patterns and behavioral analysis: This approach captures and analyzes traffic using techniques based on flow data. Although some flow data formats are specific to one vendor or another, most include traffic attributes describing which systems are communicating, where the communications are coming from and headed to, and in which direction the traffic is moving. Although full-packet inspection devices are a critical part of the security infrastructure, they’re not designed to monitor all traffic between all hosts communicating within the network interior. Behavior-based analysis, as provided by flow analysis systems, is particularly useful for detecting traffic patterns associated with malware.

 

Flow analysis is also useful for specialized devices like multifunction printers, point-of-sale (POS) terminals, automated teller machines (ATMs), and other Internet of Things (IoT) devices. These systems rarely accommodate endpoint security agents, so techniques are needed to compare actions to predictable patterns of communication. Encrypted communications are yet another application for flow and behavioral analysis. Increasingly, command-and-control traffic between a malicious server and a compromised endpoint is encrypted to avoid detection. Behavioral analysis can be used for detecting threats based on the characteristics of communications, not the contents. For example, an internal host is baselined as usually communicating only with internal servers, but it suddenly begins communicating with an external server and transferring large amounts of data.
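The Python sketch below is one way to picture that kind of baseline comparison, assuming flow records have already been reduced to (source, destination, bytes) tuples. The addresses, field layout, and thresholds are made up for the example; real flow collectors carry far more attributes.

from collections import defaultdict

def build_baseline(history):
    """Record which peers each host normally talks to and its total observed volume."""
    peers, volume = defaultdict(set), defaultdict(int)
    for src, dst, nbytes in history:
        peers[src].add(dst)
        volume[src] += nbytes
    return peers, volume

def detect_anomalies(baseline, new_flows, volume_factor=10):
    """Flag flows to unseen peers or with volumes far above the baseline window."""
    peers, volume = baseline
    alerts = []
    for src, dst, nbytes in new_flows:
        if dst not in peers.get(src, set()):
            alerts.append(f"{src} contacted previously unseen peer {dst}")
        if nbytes > volume_factor * max(volume.get(src, 1), 1):
            alerts.append(f"{src} transferred {nbytes} bytes, far above its baseline")
    return alerts

history = [("10.0.0.5", "10.0.0.20", 4_000), ("10.0.0.5", "10.0.0.21", 6_000)]
new = [("10.0.0.5", "203.0.113.9", 500_000_000)]   # sudden large external transfer
print(detect_anomalies(build_baseline(history), new))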

 

Network Performance Data: This data is most often used for performance and uptime monitoring and maintenance, but it can also be leveraged for security purposes. For example, availability of Voice over IP (VoIP) networks is critical, because any interruptions may cripple telephone service in a business. CPU and system resource pressure may indicate a DoS attack.

 

Statistical Analysis and Machine Learning: These techniques surface possible anomalies based on how threats are predicted to manifest. They involve consuming and analyzing large volumes of data using specialized systems and applications for predictive analytics, data mining, forecasting, and optimization. For example, a statistics-based method might detect anomalous behavior, such as higher-than-normal traffic between a server and a desktop, which could indicate a suspicious data dump. A machine learning-based classifier might detect patterns of traffic previously seen with malware.
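Here’s a deliberately simple, statistics-based sketch in Python: a new observation is compared against the mean and standard deviation of a historical baseline. The traffic numbers and the three-sigma default are illustrative; production platforms rely on richer features and trained models.

from statistics import mean, stdev

def is_anomalous(history, new_value, threshold=3.0):
    """Flag new_value if it sits more than `threshold` standard deviations above the historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return (new_value - mu) / sigma > threshold

# Hourly megabytes sent from a server to one desktop (illustrative numbers).
history = [110, 140, 95, 120, 130, 105, 150, 125, 115, 135, 128, 142]
print(is_anomalous(history, 20_000))   # True: possible data dump
print(is_anomalous(history, 160))      # False: within normal variation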

 

Deriving correlated, high-fidelity outputs from large amounts of event data has created the need for different methods of analysis. The large number of solutions and vendors in the SIEM, MSSP, and MDR spaces shows how important event ingestion and correlation have become in the evolving threat landscape as organizations seek a full view of their networks from a monitoring and attack mitigation standpoint.

 

Hopefully this blog series has been a catalyst for discussions and reviews. Many of you face challenges trying to get management to understand the need for formal reviews and documentation. Presenting data on real-world breaches and their ramifications may be the best way to get attention, as is reminding decision makers of their biggest enemy: complacency.

Can you believe THWACKcamp is only a week away?! Behind the scenes, we start working on THWACKcamp in March, maybe even earlier. I really hope you like what we have in store for you this year!

 

As always, here are some links I found interesting this week. Enjoy!

 

Florida man arrested for cutting the brakes on over 100 electric scooters

As if these scooters weren't already a nuisance, now we have to worry about the fact that they could have been tampered with before you use one. It's time we push back on these things until the service providers can demonstrate a reasonable amount of safety.

 

Groundbreaking blood test could detect over 20 types of cancer

At first I thought this was an old post for Theranos, but it seems recent, and from an accredited hospital. As nice as it would be to have better screening, it would be nicer to have better treatments.

 

SQL queries don't start with SELECT

Because I know some of y'all write SQL every now and then, and I want you to have a refresher on how the engine interprets your SELECT statement to return physical data from disk.

 

Facebook exempts political ads from ban on making false claims

This is fine. What's the worst that could happen?

 

Data breaches now cost companies an average of $1.41 million

But only half that much for companies with good security practices in place.

 

Decades-Old Code Is Putting Millions of Critical Devices at Risk

Everything is awful.

 

How Two Kentucky Farmers Became Kings Of Croquet, The Sport That Never Wanted Them

A bit long, but worth the time. I hope you enjoy the story as much as I did.

 

 

Even as the weather turns cold, we continue to make time outside in the fire circle.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article from my colleague Jim Hansen about ways to reduce insider threats. It comes down to training and automation.

 

A recent federal cybersecurity survey by SolarWinds found federal IT professionals feel threats posed by careless or malicious insiders or foreign governments are at an all-time high. Worse, hackers aren’t necessarily doing technical gymnastics to navigate through firewalls or network defenses. Instead, they’re favoring some particularly vulnerable targets: agency employees.

 

Who hasn’t worked a 12-hour shift and, bleary-eyed at the end of a long night, accidentally clicked on an email from a suspicious source? Which administrator hasn’t forgotten to change user authorization protocols after an employee leaves an agency? A recent study found 47% of business leaders claimed human error caused data breaches within their organizations.

 

The “People Problem”

 

Phishing attacks and stealing passwords through keyloggers are some of the more common threats. Hackers have also been known to simply guess a user’s password or log in to a network with a former employee’s old credentials if the administrator neglects to change their authorization.

 

This “people problem” has grown so big that attempting to address it through manual security processes has become nearly impossible. Instead, agency IT professionals should automate their security protocols so their systems can look for suspicious user patterns and activities that would go undetected by a human network administrator.

 

Targeting Security at the User Level

 

Automating access rights management and user activity monitoring brings security down to the level of the individual user.

 

It can be difficult to ascertain who has or should have access rights to applications or data, particularly in a large Department of Defense agency. Reporting and auditing of access rights can be an onerous task and can potentially lead to human error.

 

Automating access rights management can take a burden off managers while improving their security postures. Managers can leverage the system to assign user authentications and permissions and analyze and enforce those rights. Automated access rights management reinforces a zero-trust mentality for better security while ensuring the right people have access to the right data.
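To illustrate the kind of check an automated system can run continuously, here’s a small Python sketch that compares granted permissions against a role-based policy and reports anything extra. The roles, users, and permission names are hypothetical; a real access rights management product pulls this data from directories and entitlement stores.

# Hypothetical role policy and grant data, for illustration only.
ROLE_POLICY = {
    "analyst": {"read:reports"},
    "admin": {"read:reports", "write:reports", "manage:users"},
}

ACTUAL_GRANTS = {
    "jsmith": {"role": "analyst", "permissions": {"read:reports", "manage:users"}},
    "former_employee": {"role": None, "permissions": {"read:reports"}},
}

def audit(grants, policy):
    """Report permissions a user holds beyond what their role allows."""
    findings = []
    for user, info in grants.items():
        allowed = policy.get(info["role"], set())
        extra = info["permissions"] - allowed
        if extra:
            findings.append(f"{user}: permissions beyond role '{info['role']}': {sorted(extra)}")
    return findings

for finding in audit(ACTUAL_GRANTS, ROLE_POLICY):
    print("REVIEW:", finding)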

 

User activity monitoring should be considered an essential adjunct to access rights management. Administrators must know who’s using their networks and what they’re doing while there. Managers can automate user tracking and receive notifications when something suspicious takes place. The system can look for anomalous behavioral patterns that may indicate a user’s credentials have been compromised or if unauthorized data has been accessed.

 

Monitoring the sites users visit is also important. When someone visits a suspicious website, it’ll show up in the user’s log report. High-risk staff should be watched more closely.

 

Active Response Management

 

Some suspicious activity is even harder to detect. The cybercriminal on the other end of the server could be gathering a treasure trove of data or gaining the ability to compromise the defense network, and no one would know.

 

Employing a system designed to specifically look for this can head off the threat. The system can automatically block the IP address to effectively kick the attacker out, at least until they discover another workaround.

 

Staying Ahead in the Arms Race

 

Unfortunately, hackers are industrious and indefatigable. The good news is we now know hackers are targeting employees first. Administrators can build automated defenses around this knowledge to stay ahead.

 

Find the full article on Fifth Domain.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

vSphere, which many consider the flagship product of VMware, is virtualization software combining the vCenter management platform with the ESXi hypervisor. vSphere is available in three different licenses: vSphere Standard, vSphere Enterprise Plus, and vSphere Platinum. Each comes with a different cost and set of features. The current version of vSphere is 6.7, which includes some of the following components.

 

Have a spare physical server lying around that can be repurposed? Voila, you now have an ESXi Type 1 hypervisor. This type of hypervisor runs directly on a physical server and doesn’t need an operating system. It’s a perfect use case if you have an older physical server lying around that meets the minimum requirements. The disadvantages of this setup include higher costs, rack space, higher power consumption, and lack of mobility.

 

What if you don’t have a physical server at your disposal? Your alternative is to run ESXi on a Type 2 hypervisor, which doesn’t need a dedicated physical server but does require a host operating system. A great example is my test lab, which consists of a laptop meeting the minimum requirements. The laptop runs Windows 10 Pro as its host operating system, and my lab runs as a virtual image inside VMware Workstation. The advantages of this setup include minimal costs, lower power consumption, and mobility.

 

To provide some perspective, the laptop specifications are listed below:

  • Lenovo ThinkPad with Windows 10 Pro as the host operating system
  • Three hard drives: one 140GB drive as the primary partition and two 465GB drives acting as my datastores (DS1 and DS2, respectively)
  • 32GB RAM
  • One VMware ESXi Host (v6.7, build number 13006603)
  • Four virtual machines (thin provisioned)
    • Linux Cinnamon 19.1 (10GB hard drive, 2GB RAM, one vCPU)
    • Windows 10 Pro 1903 (50GB hard drive, 8GB RAM, two vCPUs)
    • Windows Server 2012 R2 (60GB hard drive, 8GB RAM, two vCPUs)
    • Pi-Hole (20GB hard drive, 1GB RAM, one vCPU)

 

The introduction of vSphere 6.7 brought significant improvements over its predecessor, vSphere 6.5. Some of these improvements and innovations include:

  • Simple and efficient management at scale
  • Two times faster than v6.5
  • Three times less memory consumption
  • New APIs improve deployment and management of the vCenter Appliance
  • Single reboot and vSphere Quick Boot reduce upgrade and patching times
  • Comprehensive built-in security for the hypervisor and guest OS also secures data across the hybrid cloud
  • Integrates with vSAN, NSX, and vRealize Suite
  • Supports mission-critical applications, big data, artificial intelligence, and machine learning
  • Any workloads can be run, including hybrid, public, and private clouds
  • Seamless hybrid cloud experience with a single pane of glass to manage multiple vSphere environments on different versions between an on-premises data center and a public cloud such as VMware Cloud on AWS

 

If you’re interested in learning more about vSphere, VMware provides an array of options to choose from, including training, certifications, and hands-on labs.

I’ve been lucky to meet SolarWinds customers almost everywhere on this planet with two notable exceptions: Australia and the Middle East. And this year I’m happy to make a long overdue first visit to GITEX 2019 in Dubai, Oct 6-10!

 

 

GITEX is one of a handful of technology conferences that’s truly a crossroads of customers and partners of every kind. And it’s not just the tech and vendors you already know: the event runs under a glitzy, future-forward umbrella that encourages attendees to innovate and breathe new approaches into operations.

 

IT change is, at its heart, culture change, and there can never be enough reminders to think big and outside of the routine. Of course, the global server, wireless, app, cloud, integrator, and MSP vendors we all know will be there in force, along with 100,000+ attendees. But GITEX also showcases some of the latest bleeding-edge technology applied to everyday challenges. I generally board the flight home from conferences like this with my head spinning, full of great new ideas.

 

Even better, GITEX is multicultural in a way few other tech conferences can match. It’s not just about the “what” of the job; IT is a human endeavor, and the “who” is every bit as important, if not more so. Technology and technologists tend to flock together and settle into similar national or regional habits. But events like GITEX give you a chance to discover surprising new ways to solve problems with technology, as unique as the sharp new people you meet. And judging by all the THWACK conversations with customers in the region, there are a lot indeed.

 

If you’re at GITEX, please come by booth H7-D40 and say hello. These days I’m thinking about how we bridge the gap between the transformation our businesses are asking us for and the need to ensure the apps we already deliver continue to make users happy. I’d love to hear what you’re working on, especially what’s working great for you.

 

Cheers!

So far in this virtualization series, we’ve discussed its history, the pre-eminence of server virtualization, the issues it has created, and how changing demands placed on our infrastructure are leading us to consider virtualizing all elements of our stack and moving to a more software-defined infrastructure model.

 

In this post, we explore the growing importance of infrastructure as code and the part virtualization of our entire stack plays in delivering it.

 

Why Infrastructure as Code (IaC)

According to Wikipedia, IaC is

 

“the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. The IT infrastructure managed by this comprises both physical equipment such as bare-metal servers as well as virtual machines and associated configuration resources.”

 

Why is this important?

 

Evolving Infrastructure Delivery

Traditional approaches require us to individually acquire and install custom hardware, then configure, integrate, and commission all elements of it before presenting our new environment. This introduces delays and opens up risk.

 

As enterprises demand more flexible, portable, and responsive infrastructure, these approaches are no longer acceptable. Therefore, the move to a more software-defined, virtual hardware stack is crucial to removing these impediments and meeting the needs of our modern enterprise. IaC is at the heart of this change.

 

The IaC Benefit

If you need the ability to deliver at speed and at scale, with consistency when building, changing, or extending infrastructure according to your best practices, then Infrastructure as Code is worthy of your consideration.

 

What does this have to do with virtualization? If we want to deploy as code, then our infrastructure must, in its own way, be code. Virtualizing is how we software-define it and gain the ability to deploy anywhere compatible infrastructure exists.

 

IaC in Action

How does IaC work? Public cloud perhaps provides us with the best examples, as we can automate cloud infrastructure deployment at scale and with consistency, unaffected by concerns of underlying hardware infrastructure.

 

If I want to create 100 virtual Windows desktops, I can, via an IaC template, call a predefined image, deploy it onto a predefined VM, connect it to a predefined network, and automate the delivery of 100 identical desktops into the infrastructure.
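The sketch below captures the template idea in plain Python rather than any particular IaC tool: one declarative definition is expanded into 100 identical desktop specifications, and the provision function is only a stand-in for whichever provider API or tooling (Terraform, ARM templates, CloudFormation, and so on) you actually use. The image, size, and network names are assumptions for the example.

DESKTOP_TEMPLATE = {
    "image": "win10-corp-base",     # predefined image
    "size": "Standard_D2",          # predefined VM size
    "network": "desktop-vnet",      # predefined network
}

def render(template, count):
    """Expand one template into `count` identical, uniquely named definitions."""
    return [dict(template, name=f"desktop-{i:03d}") for i in range(1, count + 1)]

def provision(definition):
    # Placeholder: in practice this call goes to your cloud or on-premises provisioning API.
    print(f"deploying {definition['name']} from {definition['image']} on {definition['network']}")

for vm in render(DESKTOP_TEMPLATE, 100):
    provision(vm)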

 

Importantly, the template means I can recreate the infrastructure whenever I like and, regardless of who deploys it, know it will be consistent, delivering the same experience every time.

 

The real power of this approach doesn’t come from a template only working in one place. The increasing number of standard approaches allows us to deploy the same template in multiple locations. When our template can be deployed across multiple environments, whether in our data center, in the public cloud, or in a mix of both, it provides the flexibility and portability crucial to modern infrastructure.

 

As our enterprises demand a quick, consistent response, at scale, across hybrid environments, standardizing our deployment models becomes crucial. This can only be done if we standardize our infrastructure elements, which ties us back to the importance of virtualization, not only in delivering our contemporary infrastructure but also in the way we will deliver infrastructure in the future.

 

We started this series asking the question of whether virtualization would remain relevant in our future infrastructure. In the final part, we’ll look to summarize how future virtualization will look and why its concepts will remain a core part of our infrastructure.

Back from Austin and THWACKcamp filming. Can you believe the event is only two weeks away? I'm excited for what we have in store for you this year. It's a lot of work to pull TC together, but the finished product always makes me smile. Wearing the bee suit helps, too.

 

As always, here are some links I found interesting this week. Enjoy!

 

15,000 private webcams left open to snooping, no password required

The manufacturers of these devices should be held accountable. Until actions are taken against the makers, we will continue to have incidents like this.

 

Microsoft: Customers are entitled to know about federal data requests

Great moment for Microsoft here, stepping forward as an advocate for customer privacy rights.

 

Crown Sterling Claims to Factor RSA Keylengths First Factored Twenty Years Ago

A silly marketing stunt, and I have no idea why they would do this except the idea that there's no such thing as bad publicity. But I think they're hurting their reputation with stunts like this one.

 

Doordash Discloses Massive Data Breach That Affected 4.9 Million People

Interesting that new users are not affected. Makes me think perhaps the hackers got hold of an older database, maybe a backup.

 

The simplest explanation of machine learning you’ll ever read

Next time you're in a meeting and someone brings the machine learning hype, just ask yourself, "Do we need a label maker?"

 

IBM will soon launch a 53-qubit quantum computer

I'm excited for the possibilities brought about by quantum computing, and cautiously optimistic this won't result in Skynet.

 

Banks, Arbitrary Password Restrictions and Why They Don't Matter

Great summary of the security issue faced by online banking.

 

If you ever get the chance to have a beef rib at Terry Black's in Austin, you will not be disappointed.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Brandon Shopp about emerging technology adoption in federal IT. It’s easy to get caught up in the hype, but our IT Trends survey results seem more grounded.

 

Exciting new technologies like artificial intelligence, blockchain, and the Internet of Things are dominating news cycles, but are they dominating federal IT environments? Maybe not.

 

According to the latest SolarWinds® IT Trends Report, emerging technology may be more of a pain than a benefit. Public sector IT managers in North America, the U.K., and Germany said they believe they’re currently ill-equipped to manage AI and blockchain with their current skillsets. Meanwhile, these same managers believe they need more training on the cloud and hybrid IT, established technologies we all seem to take for granted.

 

What’s going on here?

 

For many agencies, AI and blockchain are not yet considered essential. Agencies aren’t heavily investing in AI training, and managers don’t have the time or inclination to teach themselves about the tools.

 

On the other hand, survey respondents said they expected cloud and hybrid IT to be the most important technologies to learn about over the next three to five years. They also noted developing skills to manage hybrid IT environments has been one of their top priorities over the past 12 months. This indicates the importance of the cloud and hybrid IT for their organizations.

 

Managers want to learn, but it’s hard to do when they’re also trying to migrate legacy applications to the cloud. The migration process takes time, and juggling new projects while also trying to “keep the lights on” will always be a challenge.

 

Still, respondents listed “technology innovation” as their top career development goal over the next three to five years. How can they achieve this goal with so many obstacles seemingly in their way?

 

Leverage Third-Party Contractors With Specific Expertise

 

Third-party contractors aren’t just for implementing technology roadmaps; they’re also excellent sources of knowledge. What better way for an agency’s IT team to learn than from a skilled contractor working on-site? A contractor can show the team how it’s done and equip agency staff with the necessary knowledge.

 

Encourage Participation in User Groups and Online Forums

 

There are several free government-centric user groups where IT managers can find answers to questions and hone their skills. They’re great resources for problem-solving and learning about new technologies.

 

There are also online forums and communities professionals can leverage. From technology-specific communities to internal government message boards, there’s a strong argument for interacting with like-minded individuals willing to help each other out.

 

Attend Trade Shows and Industry Events

 

Trade shows and industry events can be exceptional resources for learning about what’s next. Everyone from less experienced employees to more seasoned professionals can benefit from sitting in on workshops, listening to presentations, or simply wandering the show floor. Here’s a great one coming up this year… Blockchain Expo North America.

 

Regardless of how it’s done, agencies and managers must invest in learning about emerging and evolving technologies because AI, blockchain, and the cloud will affect the careers of public sector IT professionals for the next several years.

 

Find the full article on Government Computer News.

 



What Is VMware NSX?

Posted by alrasheed Oct 1, 2019

With its purchase of Nicira in 2012, VMware sought to further establish itself as a leader in software-defined networking and, more importantly, in cloud computing. VMware NSX provides the flexibility network administrators have long desired without relying solely on network hardware, and its primary roles include automation, multi-cloud networking, security, and micro-segmentation. Additionally, the time spent provisioning a network is dramatically reduced.

 

NSX, the VMware software-defined networking (SDN) platform, lets you create virtual networks, including ports, switches, firewalls, and routers, without having to rely on a physical networking infrastructure. It’s software-based: nothing physical is involved, but the networks created can be viewed and managed like any other network. Simply put, the NSX network hypervisor allows network administrators to create and manage virtualized networks. Virtual networking allows devices, including computers, virtual machines, and servers, to communicate in software without network cables connecting them.

 

However, virtual networks are provisioned, configured, and managed using the underlying physical network hardware. It’s important to keep this in mind, and it’s not my intention to say otherwise or mislead anyone.

 

Given the choice between a physical and a software-defined network infrastructure, I prefer SDN. Physical networking devices and the software built into them depreciate over time. These devices also take up space in your data center and consume electricity to keep them powered on. Also, time is money, and it takes time to configure these devices.

 

Software-defined networks are easier on the eyes, which includes not having to worry about network cables or a cable management solution. Does anyone truly find the image below appealing?

 

Poor Cable Management

 

Virtual networking provides the same capabilities as a physical network, but with more flexibility and more efficiency across multiple locations, without having to replace hardware or compromise reliability, availability, and security. Devices are connected in software using a vSwitch. Communications between the various devices can be shared on one network or separated onto different networks. With software-defined networks, templates can be created and modified as needed without having to reinvent the wheel as you would with a physical networking device.

 

NSX-T is VMware’s next-generation SDN platform, which includes the same features as its predecessor, NSX-V. The main difference is its ability to run on different hypervisors without having to rely on a vCenter Server.

 

VMware NSX editions include Standard, Professional, Advanced, Enterprise Plus, and Remote Office Branch Office (ROBO), each with a unique use case. For example, Standard provides automated networking, while Professional includes the same plus micro-segmentation. For detailed information about each edition, please review the NSX datasheet here.

 

If you’re interested in learning more about NSX, VMware provides an array of options to choose from, including training, certifications, and hands-on labs.

Here’s the fifth part of the series I’ve been writing about HCI. Again, I don’t want to sway my audience from one platform to another. Rather, I want them to follow the edict caveat emptor. Being aware of the available options is mission-critical for the future of the environment we’re talking about here. You need to evaluate the functionality you require, as well as what’s covered in the existing products and their roadmaps, and determine which platform best satisfies your needs.

 

In some cases, the “business as usual” approach may work best: the virtualization platform, whether VMware, Xen Server, KVM, or OpenStack, and even some container-based platforms, must be maintained, expanded, or leveraged for the scalability and approach the business requires. To be clear, most of what can be supported on HCI may be accomplished (within certain parameters) with a more piecemeal virtualization approach. For example, should you choose Acropolis as your virtualization platform, you’ll require a Nutanix environment. HyperCore (a KVM-based hypervisor) will point you toward Scale Computing. Again, the goal here is to ensure the alternative hypervisor satisfies your needs. At one point, I had heard a significant percentage of the customers who began with Acropolis subsequently migrated toward VMware. I’m unsure if there are stats on where that stands today, but it’s important to note. Remember, this is no disparagement of Acropolis; it’s just not necessarily right for everyone. The relevance of that statement, of course, is to know your virtualization requirements and to ensure you’re covering those bases.

 

I’ve stressed scalability from within (storage horsepower versus CPU horsepower) and how these environments can, or can’t, expand within those categories. It may or may not be relevant to the customer’s needs to choose a platform capable of expanding incrementally in those categories. Should it be relevant, the burden is on the customer to ensure they’ve chosen the correct platform, because the cost of these choices can be profound.

 

I’ve also stressed how the storage environment within an HCI platform may or may not have features important to the overall strategy. Are deduplication, compression, and replication part of your requirement set? If so, this need will surely affect your choice. I want to stress how important this requirement can be, particularly to overall backup/recovery/DR needs. It’s important for the customer to take these issues into account; knowing your concerns will determine whether this is relevant to your strategy.

 

Remember, the goal here is to build a platform to satisfy these requirements for the entire depreciation period, so you’ll be able to support your requirements until the equipment reaches end of life.

 

With no bias toward or against any platform, the true value of how you choose to proceed, or even whether you choose to proceed at all, will be determined by scalability, management, storage, and ultimately how functional the environment is moving forward.

In the first two parts of this series, we looked at both the history of virtualization and the new set of problems it’s introduced into our infrastructures.

 

In part three, we’ll investigate its evolution and how it will continue to be part of our technology deployments.

 

A Software-Defined Future

 

The future of virtualization, in my opinion, comes from looking beyond our traditional view.

Thinking about its basic concept could give us a clue to its future. At its base level, server virtualization takes an environment made up of applications and operating systems installed on a specific hardware platform and allows us to extract the environment from the hardware and turn it into something software-based only, something software-defined.

 

Software-defined is a growing movement in modern IT infrastructure deployments. Extracting all elements of our infrastructure from the underlying hardware is key when we want to deploy at speed, at scale, and with the flexibility to operate infrastructure in multiple and differing locations. For this to work we need to software-define beyond our servers.

 

Software Defining Our Infrastructure

 

Storage and networking are now widely software-defined, be it by specialist players or major vendors. They’ve realized the value of taking things previously tied to custom hardware and packaging them to be quickly and easily deployed on any compatible hardware.

 

Why is this useful? If we look at what we want from our infrastructure today, much of it has been defined by how the hyperscale cloud providers deliver infrastructure. None of us knows, or really cares, what sits under the covers of our cloud-based infrastructure; our interest is only in what it delivers. If our cloud provider swapped its entire hardware stack overnight, we wouldn’t know, and as long as our infrastructure continued to deliver the outcomes we designed it for, it wouldn’t matter.

 

Without software-defining our entire stack, there’s little chance we can deploy on-premises with the same speed and scale seen in cloud, making it difficult for us to operate the way businesses increasingly demand.

 

Is Software-Defined Virtualization?

 

This article may raise the question, “Is software-defined really virtualization?” In my opinion, it certainly is. As discussed earlier, virtualization is the separation of software from hardware dependency, providing the flexibility to install the software workload on any compatible hardware. This really is the definition of software-defined, be it storage, networking, or more traditional servers.

 

The Benefits of Software-Defined

 

If virtual, software-defined infrastructures are to continue to be relevant, they need to be able to meet modern and future demands.

 

The infrastructure challenges within the modern enterprise are complex, and we’ve needed to change the way we approach infrastructure deployment. We need to respond more quickly to new demands, and custom hardware restrictions will limit our ability to do so.

 

Virtualizing our entire infrastructure means we can deliver at speed and with consistency, in any location, on any compatible hardware, with the portability to move it as needed for performance and scale, without disruption. All this is at the core of a successful modern infrastructure.

 

In the next part of this series, we’ll look at how infrastructure deployment is developing to take advantage of software-defined and how methodologies such as infrastructure as code are essential to our ability to deliver consistent infrastructure, at scale and speed.

Hi there! It's Suzanne again. You might remember me from The Actuator April 10th, where I stepped in for Tom because he was busy "doing things." He's on his way to yet another conference and asked me to help out, as if I don't have enough things to do while he's away. Of course I agreed to do it, but not before I made him promise to build me a fire outside and serve me a cocktail.

 

So, here are some links I found interesting this week. Hope you enjoy!

 

People v mosquitos: what to do about our biggest killer

As I sit here in our yard, swatting away mosquitoes, I think it's time for us to eradicate them from the face of the Earth. And if this process involves flamethrowers, sign me up.

 

Seven Ways Telecommuting Has Changed Real Estate

As someone who managed a co-working space and now works from home as Director of Lead Generation for a real estate team, every one of these points rings true.

 

WeWork unsecured WiFi exposes documents

Speaking of co-working spaces, WeWork shows how not to do network security. I bet the printers in the office are storing every page scanned, too! Oh, WeWork (sigh).

 

The true magic of writing happens in the third draft

For me, the true magic of writing happens during the third cocktail.

 

Google Says It's Achieved Quantum Supremacy, a World-First: Report

Tom keeps mumbling to me about quantum computing, so I'll include this for him. I'm not worried about Google achieving this, because it's likely they'll kill the product in less than 18 months.

 

What to Consider About Campus Safety, Wellness

As we start touring campuses with our children, these types of questions become important. Is it wrong to expect your 18-year-old (who's away at school) to check in with you daily? Asking for a friend.

 

7 Reasons Why Women Speakers Say No to Speaking & What Conference Organizers Can Do About It

Second of a 2-part series that talks about why women turn down speaking engagements. I remember the time Tom arranged for an all-women speaking event, 24 women speakers in total. It took longer to arrange, and the process was more involved, but I was proud he made the effort.

 

Out for a morning walk last week, we stumbled upon this beautiful view of a pond with steam rising off it. #Exhale

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article from my colleague Jim Hansen about the Navy’s new cybersecurity program. There’s no doubt our troops rely on technology and cyberthreats are increasing.

 

The Navy’s new Combat to Connect in 24 Hours (C2C24) is an ambitious program with the potential to change naval warfare as we know it.

 

The program is designed to improve operational efficiency by automating the Navy’s risk management framework (RMF) efforts; providing sailors with near real-time access to critical data; and accelerating the Navy’s ability to deploy new applications in 24 hours rather than the typical 18 months.

 

C2C24 is using open-source technologies and a unique cloud infrastructure to reduce the network attack surface and vulnerabilities. The Navy is standardizing its network infrastructure and data on open-source code and using a combination of shore-based commercial cloud and on-ship “micro cloud” for information access and sharing.

 

But malicious nation states are continually seeking ways to compromise defense systems—and they tend to be able to react and adjust quickly. As Navy Rear Adm. Danelle Barrett said, “Our adversaries don’t operate on our POM (program objective memorandum) cycle.”

 

With its ship-to-shore infrastructure, C2C24 could provide an enticing target. To complete its C2C24 mission, the Navy should pay special attention to the final two phases of the RMF: information system authorization and security controls monitoring.

 

Knowing Who, When, and Where

 

With C2C24, roughly 80 percent of mission-critical data will be stored on the ship. This will allow personnel to make operational decisions in real time without having to go back to the shore-based cloud to get the information they need at a moment’s notice.

 

But what if someone were to compromise the onshore cloud environment? Could they then also gain access to the ship’s micro cloud and, by extension, the ship itself?

 

It’s important for personnel to be notified immediately of a possible problem and be able to pinpoint the source of the issue so it can be quickly remediated. They need to see precisely what’s happening on the network, whether the activity is happening onshore, onboard the ship, or over the Consolidated Afloat Networks and Enterprise Services (CANES) system, which the Navy intends to use to deliver C2C24.

 

They also need to be able to control and detect who’s accessing the network. This can be achieved through controls like single sign-on and access rights management. Security and event management strategies can be used to track suspicious activity and trace it back to internet protocol addresses, devices, and more.

 

In short, it’s not just about getting tools and information quickly, but about thinking of the entire RMF lifecycle, from end to end. In the beginning, it’s about understanding the type of information being processed, where it’s stored, and how it’s transmitted. In the end, it’s about controlling access to information and monitoring it.

 

This is particularly important in a shipboard environment, where information means different things to different people. A person managing course corrections will need access to a particular data set, while someone managing weapons targeting may need different data altogether.

 

Controlling and monitoring the information flow is paramount to making sure data stays in the right hands. Further, ensuring the data is the expected data and not misinformation injected into the system by bad actors who have compromised the infrastructure is equally important.

 

Malicious Attackers Aren’t the Only Threat

 

Security isn’t the only concern. One of the core goals of C2C24 is to make the Navy’s operations run more efficiently: information and applications should be obtained more quickly so warfighters have what they need sooner.

 

But different incidents can undermine this effort. A commercial cloud failure or lost satellite connectivity could play havoc with a ship’s ability to receive and send information to and from shore. These issues can compromise commanders’ abilities to make decisions that can affect current and future operations.

 

Thus, it’s just as important to keep tabs on network performance as it is to check for potentially malicious activity. Commanders must be alerted to network slowdowns or failures immediately. Meanwhile, personnel must have visibility into the source of these issues so they can be quickly rectified and the network can be restored to an operational state.

 

Fortunately, the fact that the Navy is basing C2C24 on a standardized, open-source infrastructure makes this easier. It’s simpler to monitor a single set of standardized network ports, for example, than it is to monitor non-standardized ports and access points. And an open-source infrastructure lays the groundwork for any number of monitoring solutions to provide better visibility and network security.

 

This standardization makes C2C24 a visionary program with the potential to redefine the Navy’s ability to adapt quickly to any situation and significantly improve its security posture. Warfighters will have the right information and applications much faster than before, and data security will be greatly improved—particularly if a government network monitoring solution is made an instrumental part of the effort.

 

Find the full article on SIGNAL.

 



The vCommunity

Posted by alrasheed Sep 24, 2019

We all want to be part of something special and share a common bond. That includes doing what’s best for you, but also keeping others in mind and giving them the support they need to be successful in life, professionally and personally. Teams are defined by the sacrifices we make for each other in hopes of succeeding in a way that benefits everyone, while building an experience that leads to positive, fruitful relationships for years to come.

 

We’ve all been there. The feeling of hopelessness. No matter what you do or say, and regardless of your contributions, you feel neglected or underappreciated. It creates a feeling of emptiness, as if you don’t belong or you’re on the outside looking in. Regardless of your profession, we each have a set of standards, morals, and core values in place which we should adhere to.

 

These neglected and underappreciated sentiments described how I felt in my career roughly two years ago. It was a constant feeling of always being at the bottom of the proverbial totem pole. A “take one step forward and two steps back” mentality. The approach of taking the high road felt like a dead end or being stuck in a roundabout where only left turns are permitted, like a NASCAR race, but considerably less fun.

 

When I discovered the vCommunity, everything changed. There was light at the end of the tunnel. There was hope I never realized existed, and it changed my outlook on my professional career and helped guide me personally. The vCommunity shares the same values I preach, like unity, joint ownership, teamwork, generosity, and most importantly, that kindness matters. You get out of it what you put into it, and I can assure you it’s been a blessing I wish I had discovered years ago.

 

The vCommunity includes individuals, groups, organizations, and everyday people located across the globe in areas you’d never imagine. But it doesn’t matter because we serve one purpose—to help one another as best as possible. A simple five-minute conversation has the potential to turn into something special for both parties.

 

I’ve shared my experiences, and each has provided wonderful returns. Examples include tech advocacy programs like vExpert, Cisco Champion, Veeam Vanguard, and the Nutanix Technology Champions. I’ve had the pleasure to be introduced to wonderful people in the IT industry from across the globe thanks to my good friends at Tech Field Day and Gestalt IT. I’m now considered an independent influencer who has a passion for connecting people with technology through blogging.

 

None of this would be possible without the aforementioned groups, and there’s no chance I’d consider myself a blogger without their support.

 

There are additional methods to connect with fellow vCommunity members, and they don’t have to include any association with a group or program. How so, you ask? By simply using your voice and connecting with podcasters to share your stories and experiences. You’d be surprised how much of an influence this can be for someone. I’ve had the pleasure of joining Datanauts, vGigacast, Virtual Speaking Podcast, Gestalt IT, and Technically Religious. Each has provided me with a platform to help others, and the feedback has been tremendous. My biggest takeaway is the influence it has had on others. I’m humbled to know I’ve had a positive effect on someone.

 

Additionally, I recommend the following podcasts because they provide quality content with valuable information and resources. They include Cisco Champion Radio, The CTO Advisor, DiscoPosse Podcast, Nerd Journey Podcast, Nutanix Community Podcast, Packet Pushers Community Show, Real Job Talk, Tech Village Podcast, The VCDX Podcast, ExploreVM Podcast, Veeam Community Podcast, VMUG Professional Podcast, Virtual Design Master, and the VMware Communities Podcast.

 

Let’s recap and discuss why I’ve taken the time to share this with you. I want you to grow, be empowered, and be successful. It’s my goal to help someone achieve these goals by providing any assistance possible. #GivingBack should be a requirement, because nobody achieves success without assistance from someone. For me, Jorge Torres and William Lam led me down the path to this point, and I’ll always owe them for believing in me.

 

I realize there are plenty of examples of giving back and I wish I could acknowledge every one of you, but you know who you are, and I thank you for it. The moral of the story is be happy, give back, and you’ll be rewarded for your contributions and dedication to the #vCommunity. Lead by example and others will follow.

 

“For fate has a way of charting its own course, but before one surrenders to the hands of destiny, one might consider the power of the human spirit and the force that lies in one’s own free will.” Lost: The Final Chapter

This is the fourth post of my series on hyperconverged infrastructure (HCI) architectural design and decision-making. For my money, the differences between these diverse systems are a function of the storage involved in the design. On the compute side, these environments use x86 and a hypervisor to create a cluster of hosts to support a virtual machine environment. Beyond some nuances in the hardware-based models, networking tends toward a similar approach in each. But often, the storage infrastructure is a differentiator.

 

Back in 2008, LeftHand Networks (later acquired by HPE) introduced the concept of a virtual storage appliance. In this model, the storage resides within the ESX servers in the cluster, is aggregated as a virtual iSCSI SAN, and provides redundancy across the nodes. Should an ESX host crash, the VMs reboot on a different host in the cluster as usual, and the storage remains consistent regardless. By today’s standards it’s not at all inelegant, but it lacks some of the functionality of, for example, vSAN. VMware vSAN follows a similar model, but can also incorporate deduplication, hybrid or all solid-state disk, and compression. To me, vSAN used in vSAN-ready nodes, also a component of the Dell EMC VxRail product, is a modernized version of what LeftHand brought to the table some 11 years ago. It’s a great model, and it eliminates the need for a company building a virtualized infrastructure to purchase a more traditional SAN/NAS infrastructure to connect to the virtualized environment. Cost savings and simpler management make this approach more cost-effective.

 

Other companies in the space have leveraged the server-based storage model. The two that spring most rapidly to mind are Nutanix and SimpliVity, which have built solutions based on packaged, single-SKU boxes built around a similar model. Of course, the ways to manage the environments are different, but both support the goal of managing a virtual landscape, with some aspects of differentiation (Nutanix supports its own hypervisor, Acropolis, which nobody else does). From a hardware perspective, the concept of packaged equipment sized to manage a particular environment is practically the same: x86 servers run the hypervisor, with storage internal to each node of the cluster.

 

I’ve talked previously about some of the scalability issues that may or may not affect end users, so I won’t go deeper into it here. Feel free to check out some of my previous posts about cluster scalability issues causing consternation about growth.

 

But storage issues are still key, regardless of the platform you choose. I believe it’s one of only two or three issues of primary concern. While compression, deduplication, and the efficiency of how SSD is incorporated are key to using storage, there’s more. In a major use case for HCI, the hub-and-spoke approach in which the HCI sits on the periphery and a more centralized data center acts as the hub, one of the keys to backing up the data is the replication of all changed data from the remote site to the hub, with storage awareness.

 

I feel many of the implementations I’ve been part of have used HCI for ROBO (remote office/branch office), VDI, or a key application role, and these require a forward-thinking approach to backing up their datasets. If you, as the decision-maker, value that piece as well, look at how the new infrastructure would handle the data and be able to replicate it (hopefully with no performance impact on the system) so all data is easily recoverable.

 

When I enter these conversations, if the customer doesn’t concern themselves with backup or security from the ground up, mistakes are being made. I try to emphasize this is likely the key consideration from the beginning.
