
Geek Speak


Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting article by my colleague Mav Turner with tips on monitoring and troubleshooting distributed cloud networks in government agencies. I have come to expect a bit of skepticism from government customers about cloud adoption, but I’m seeing more evidence of adoption daily.


The Office of Management and Budget’s Cloud Smart proposal signals both the end of an era and the beginning of new opportunities. The focus has shifted from ramping up cloud technologies to maximizing cloud deployments to achieve the desired mission outcomes.


Certainly, agencies are investing heavily in these deployments. Bloomberg Government estimates federal cloud spending will reach $6.5 billion in fiscal year 2018, a 32% increase over last year. However, all the investment and potential could be for naught if agencies don’t take a few necessary steps toward monitoring and troubleshooting distributed cloud networks.


1. Match the monitoring to the cloud. Different agencies use a variety of cloud deployments: on-premises, off-premises, and hybrid. Monitoring strategies should match the type of infrastructure in place. A hybrid IT infrastructure, for example, will require monitoring that allows administrators to visualize applications and data housed both in the cloud and on-premises.


2. Gain visibility into the entire network. It can be difficult for administrators to accurately visualize what’s happening within complex cloud-based networks, particularly when data is being managed outside the organization.


Administrators must be able to visualize the entire network, so they can accurately pinpoint the root cause of problems. Are they occurring within the network or the system?


3. Reduce mean time to resolution. Data visualization and aggregation can be useful in minimizing downtime when a problem arises, especially if relevant data is correlated. This is much better than spending the time to go to three different teams to solicit the same information, which may or may not be readily available.
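Such correlation can be sketched in a few lines: pull timestamped events from each team’s tooling into one ordered timeline, so a single administrator can see cause and effect side by side. A minimal illustration in Python (the feeds and messages here are hypothetical):

```python
from datetime import datetime

# Hypothetical event feeds from three teams' tools, each a list of
# (timestamp, message) pairs. In practice these would come from your
# network, systems, and application monitoring tools.
network_events = [(datetime(2019, 8, 1, 9, 0, 12), "link flap on edge router")]
system_events = [(datetime(2019, 8, 1, 9, 0, 15), "app server lost DB connection")]
app_events = [(datetime(2019, 8, 1, 9, 0, 18), "checkout requests timing out")]

def correlate(*feeds):
    """Merge labeled event feeds into one time-ordered timeline."""
    timeline = []
    for label, feed in feeds:
        for ts, msg in feed:
            timeline.append((ts, label, msg))
    return sorted(timeline)

timeline = correlate(("network", network_events),
                     ("system", system_events),
                     ("app", app_events))
for ts, label, msg in timeline:
    print(f"{ts:%H:%M:%S} [{label}] {msg}")
```

With the three feeds merged, the chain from network fault to application symptom is visible to one person, rather than living in three separate ticket queues.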


4. Monitor usage and automate resource lifecycle to control costs. Agencies should carefully monitor their cloud consumption to avoid unnecessary charges their providers may impose. They should also be aware of costs and monitor usage of services like application programming interface access. Often, this is free—up to a point. Being aware of the cost model will help admins guide deployment decisions. For example, if the cost of API access is a concern, administrators may also consider using agent-based monitoring, which can deliver pertinent information without having to resort to costly API calls.
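Being aware of the cost model can be made concrete with simple arithmetic. A sketch in Python; the quota and price figures below are illustrative assumptions, not any provider’s real pricing:

```python
def polling_cost(calls_per_poll, poll_interval_s, free_calls_per_month, price_per_1k):
    """Estimate monthly API calls and overage cost of poll-based monitoring.

    All figures are illustrative; substitute your provider's real
    quota and pricing to guide deployment decisions.
    """
    seconds_per_month = 30 * 24 * 3600
    calls = calls_per_poll * seconds_per_month // poll_interval_s
    overage = max(0, calls - free_calls_per_month)
    return calls, overage * price_per_1k / 1000

# Polling 40 metrics every 60 seconds quickly blows past a 1M-call free tier
calls, cost = polling_cost(40, 60, 1_000_000, 0.01)
print(calls, f"${cost:.2f}")
```

Running the same numbers against an agent-based model, where data is pushed rather than polled through the metered API, is how an admin can decide which approach is cheaper for a given workload.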


The other key to keeping costs down in a government cloud environment is ensuring a tight resource lifecycle for cloud assets. Often, this will require automation and processes to prevent resources from existing beyond when they’re needed. Just because admins think they’re no longer using a service doesn’t mean it’s gone; it may still be running, accruing charges and posing a security risk. Tight control of cloud assets and automated lifecycle policies will help keep costs down and minimize an agency’s attack surface.
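One common pattern for this kind of lifecycle automation is a time-to-live tag on every resource, enforced by a scheduled job. A minimal sketch in Python; the expire-after tag name and the inventory format are assumptions, and a real job would pull the inventory from the provider’s API and terminate (or escalate) whatever it flags:

```python
from datetime import datetime, timezone

def expired_resources(resources, now=None):
    """Return IDs of resources whose 'expire-after' tag is in the past.

    Untagged resources are flagged too, so someone must claim them
    rather than letting them linger, running up charges.
    """
    now = now or datetime.now(timezone.utc)
    flagged = []
    for res in resources:
        tag = res.get("tags", {}).get("expire-after")
        if tag is None or datetime.fromisoformat(tag) <= now:
            flagged.append(res["id"])
    return flagged

inventory = [
    {"id": "vm-001", "tags": {"expire-after": "2019-06-30T00:00:00+00:00"}},
    {"id": "vm-002", "tags": {"expire-after": "2030-01-01T00:00:00+00:00"}},
    {"id": "vm-003", "tags": {}},  # untagged: nobody remembers why it exists
]

to_review = expired_resources(inventory, now=datetime(2019, 9, 1, tzinfo=timezone.utc))
print(to_review)
```

The policy choice here, flagging untagged resources as well as expired ones, is what forces every asset to have an owner and a lifespan.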


5. Ensure an optimal end-user experience. Proactively monitoring end-user experiences can provide real value and help ensure the network is performing as expected. Periodically testing and simulating the end-user experience allows administrators to look for trends signaling the cause of network problems (periods of excessive bandwidth usage, for example).
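The trend-spotting side of this can be sketched simply: compare a rolling window of synthetic response-time samples against a known baseline. A minimal sketch in Python; the baseline, window, and threshold are illustrative assumptions:

```python
from statistics import mean

def degraded(samples, window=5, baseline_ms=200, factor=2.0):
    """Flag degradation when the rolling average of the last `window`
    response-time samples exceeds `factor` times the baseline.

    In a real deployment, samples would come from periodic synthetic
    transactions (e.g., timed requests against a key application).
    """
    if len(samples) < window:
        return False
    return mean(samples[-window:]) > factor * baseline_ms

healthy = [180, 190, 175, 210, 185, 195]
congested = healthy + [450, 480, 510, 470, 440]  # e.g., excessive bandwidth usage

print(degraded(healthy))
print(degraded(congested))
```

Keeping the samples lets administrators correlate the onset of degradation with other network data, rather than learning about problems from end-user complaints.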


6. Scale monitoring appropriately. Although many government projects are limited in scope, agencies may still find they need to scale their cloud services up or down at given points based on user demand. Monitoring must be commensurate with the scalability of the cloud deployment to ensure administrators always have a clear picture of what’s going on within their networks.


Successfully realizing Cloud Smart’s vision of a more modern IT infrastructure based on distributed cloud networks will require more than just choosing the right cloud providers or type of cloud deployments. Agencies must complement their investments with solutions and strategies to make the most of those investments. Adopting a comprehensive monitoring approach encompassing the entire cloud infrastructure is the smart move.


Find the full article on Government Computer News.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Obtaining a VMware certification requires effort, dedication, and a willingness to learn independently with a commitment to the subject matter.


To fulfill the requirements made by VMware, you must attend a training class before taking the exam. The cost for each of these courses provided by VMware is extremely high, but I found an alternative to fulfill this requirement at a very affordable rate. In my case, I decided to use my personal funds to meet the training requirements made by VMware to pursue the certification. Without question, cost was the deciding factor, and thanks to Andy Nash, I discovered Stanly Community College. The course is taught completely online and covers the installation, configuration, and management of VMware vSphere. It also meets all the training requirements made by VMware. If you’re interested in training provided directly by VMware, review the vSphere: ICM v6.5 course page for details on its content and cost, then compare it with the Stanly Community College offering.


I highly recommend reading the certification overview provided by VMware before moving forward with your decision. Each situation is unique, and the information provided will serve as an asset when determining which training option to pursue for the certification you have in mind. For example, the prerequisites for the VCP-DCV6 exam can be found here. Additionally, the requirements vary for first-time exam takers or if you don’t currently hold a VCP certification. As of February 4, 2019, VMware has removed the mandatory two-year recertification requirement, giving you more flexibility in when you upgrade and recertify.


There are multiple training options in addition to the choices I listed above, and, in some cases, they’re free of cost and are available to you anytime, anywhere. The formats include hands-on labs, vBrownBag, blogs, and various podcasts.


Hands-on labs are a fantastic resource because they permit you to test and evaluate VMware products in a virtual environment without needing to install anything locally. VMware products include Data Center Virtualization, Networking and Security, Storage and Availability, and Desktop and Application Virtualization, to name a few. Additionally, this gives you the opportunity to confirm which product you’re interested in pursuing the respective certification for, without making a financial commitment for the required training course.


vBrownBag is all about the community! Its purpose is to help one another succeed through contributions made by fellow community members. In the end, it’s all about #TEAM.


Blogs include the following contributors: Melissa Palmer, Daniel Paluszek, Gregg Robertson, Lino Telera, Chestin Hay, Cody Bunch, and many others.


Podcasts include the following contributors: Simon Long, Pete Flecha and John Nicholson, VMware Communities Roundtable, and many more.


Let’s examine the pros and cons of pursuing a certification.




Pros:

  • Used to measure a candidate’s willingness to work hard and meet goals
  • IT certifications are required for certain job openings (may assist you with obtaining a desired position as an applicant or current employee)
  • Certifications are used to confirm subject-matter expertise
  • Companies save on training costs if they hire a certified candidate
  • Certifications increase your chances of receiving a promotion or raise
  • Certifications ensure you’re up to date on the latest best practices

Cons:

  • Cost (out of pocket vs. employer)
  • IT certifications are required for certain job openings (may prevent you from obtaining a desired position as an applicant or current employee)
  • Time (balancing certification training with a full-time job or family responsibilities)
  • Certifications may not be considered valuable if you don’t have the experience to back them up
  • Test vs. business needs/situations may not be aligned



As you can see, multiple resources are available to help you succeed in pursuit of your certification, including the wonderful contributors in the #vCommunity.

In part one of this series, I tried to apply context to the virtualization market—its history, how it’s changed the way we deploy infrastructure, and how it’s developed into the de facto standard for infrastructure deployment.


Over the last 20 years, virtualization has had a very positive impact in enabling us to deliver technology in a smarter, quicker way, bringing more flexibility to our infrastructure and giving us the ability to meet the demands placed upon it much more effectively.


But virtualization, or maybe more specifically, server virtualization, is now a very traditional technology designed to operate within our data centers. While it’s been hugely positive, it’s created its own set of problems within our IT infrastructures.


Server Sprawl

Perhaps the most common problem we witness is server sprawl. The original success of server virtualization came from allowing IT teams to reduce the waste caused by over-specified, underused, and expensive servers filling racks in their data centers.


This quickly evolved as we recognized how virtualization allowed us to deploy new servers much more quickly and efficiently. Individual servers for specific applications no longer needed new hardware to be bought, delivered, racked, and stacked. This made it an easy decision to build them for every application. However, this simplicity has created an issue much like the one we’d tried to solve in the first place. The ease of creating virtual machines meant instead of dealing with tens of physical servers, we had hundreds of virtual ones.



The impact of virtual server sprawl introduced a wide range of challenges, from the simple practicality of managing so many servers, to securing them, networking them, backing them up, building disaster recovery plans, and even understanding why some of those servers exist at all. An infrastructure at such a scale also becomes cumbersome, reducing the benefits of flexibility and agility that made virtualization such a powerful and attractive technology shift in the first place.



With size comes complexity. Multiple servers, large storage arrays, and complex networking design make implementation more difficult, as of course does the management and protection of more complex environments.


The complexity highlighted in the physical infrastructure environment is also mirrored in the application infrastructure it supports. Increasingly, enterprises struggle to fully understand the complexity of their application stack. Dependencies are unclear, and there are concerns that changes to one part of the infrastructure could have unknown impacts across the rest. Complex environments carry an increased risk of failures and security breaches, higher running costs, and are slower and more difficult to change, innovate, and respond to the demands placed upon them.


The Evolution of Virtualization

With all of this said, the innovation that gave us virtualization technology in the first place hasn’t stopped. The challenges created by growing virtual environments have been recognized. Significant advances in virtualization management now allow us better control across the entire virtual stack. Innovations like hyperconverged infrastructure (HCI) are making increasingly integrated and software-driven hardware, networking, and storage elements much more readily available to our enterprises.


In the remaining parts of this series, we are going to look at the evolution of virtualization technologies and how continual innovation has taken virtualization far beyond the traditional world of server virtualization. It has become more software-driven, with increased integration with the cloud, improved management, better delivery at scale, and innovative deployment methods, all to ensure virtualization continues to deliver the flexibility, agility, innovation, and speed of delivery that has made it such a core component of today’s enterprise IT environment.

I’m at VMworld this week in San Francisco, enjoying the cooler weather. After three straight years in Las Vegas, it’s a nice change. The truth is, this event could be held anywhere, because the #vCommunity is filled with good people who are fun to be around.


As always, here are some links I hope you find interesting. Enjoy!


Major breach found in biometrics system used by banks, UK police and defence firms

"As a precaution, please reset your fingerprints and face, thank you."


California law targets biohacking and DIY CRISPR kits

Not sure whether we need laws against this; after all, we don’t have laws against do-it-yourself dentistry. But we certainly could use some education regarding biohacking and your overall health.


Apple warns its credit card doesn't like leather or denim or other cards

Any other company would be laughed out of existence. With Apple, we just laugh, and then pay thousands of dollars for items we don't need.


VMware is bringing VMs and containers together, taking advantage of Heptio acquisition

One of the many announcements this week, as VMware is looking to help customers manage the sprawl created by containers and Kubernetes.


The surprisingly great idea in Bernie Sanders’s Green New Deal: electric school buses

This is a good idea, which is why it won't happen.


Hackers are actively trying to steal passwords from two widely used VPNs

Please, please, please patch your systems. Stop making excuses. You can provide security AND meet your business SLAs. Is it hard? Yeah. Impossible? Nope.


Company that was laughed offstage sues Black Hat

Well, now I'm laughing, too. You can't expect to get on stage in front of a deeply technical audience, use a bunch of made-up words and marketing-speak, and be taken seriously.


This is my fifth consecutive VMworld, and I’m back in the same city as the first. Lots of memories for me and my journey with the #vCommunity.


Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s the second part of the article from my colleague Jim Hansen about two key assets the government can use to help with cybersecurity.


Technology: Providing the Necessary Visibility


Cybersecurity personnel can’t succeed without the proper tools, but many of them don’t have the technology necessary to protect their agencies. According to the Federal Cybersecurity Risk Determination Report and Action Plan, 38% of federal cyberincidents didn’t have an identified attack vector. Per the report, IT professionals simply don’t have a good grasp of where attacks are originating, who or what is causing them, or how to track them down.


Part of this is due to the heavily siloed nature of federal agencies. The DoD, for example, has many different arms working with their own unique networks. It can be nearly impossible for an Air Force administrator to see what’s going on with the Army’s network, even though an attack on one could affect the entire DoD infrastructure. Things become even more complicated when dealing with government contractors, some of whom have been behind several large security breaches, including the infamous Office of Personnel Management security breach in 2014.


Some of it is due to the increasing complexity of federal IT networks. Some networks are hosted in the public cloud, while others are on-premises. Still others are hybrid, with some critical applications housed on-site while others are kept in the cloud.


Regardless of the situation, agency administrators must have complete visibility into the entirety of the network for which they are responsible. Technology can provide this visibility, but not the garden-variety network monitoring solutions agencies used 10 years ago. The complexity of today’s IT infrastructures requires a form of “network monitoring on steroids”: tools that can effectively police any type of network—distributed, on-premises, cloud, or hybrid—and provide unfettered visibility, alerts, and forensic data to help administrators quickly trace an event back to its root cause.


Administrators must have a means of tracing activity across boundaries, so they can have just as much insight into what’s happening at their cloud provider as they do in their own data center. Further, they must be able to monitor their data as it passes between these boundaries to ensure information is protected both at rest and in-flight. This is especially critical for those operating hybrid cloud environments.


None of this should be considered a short-term fix. It can take the government a while to get things going—after all, many agencies are still trying to conform to the National Institute of Standards and Technology’s password guidelines. That’s OK, though; the fight for good cybersecurity will be ongoing, and it will be incumbent upon agencies to evolve their strategies and tactics to meet these threats over time. The battle begins with people, continues with technology, and will ultimately end with the government being able to more effectively protect its networks.


Find the full article on Fifth Domain.



If you’re in tune with the various tech communities, you’ve probably noticed a big push for professional development in the technical ranks. I love the recognition that IT pros need more than just technical skills to succeed, but most of the outreach has been about improving one’s own stature in the tech hierarchy. There’s surprisingly little focus on those who are happy in their place in the world and just want to make the world a little better. What the heck am I talking about? It’s not just us as individuals who need to grow and improve; our IT organizations need to evolve as well. Perhaps we need an example scenario to help make my point…


“Once upon a time, I was part of an amazing team made up of talented people, with a fantastic manager. Our camaraderie was through the roof. We had all the right people on the bus. Ideas were plentiful. We also couldn’t get anything of consequence done in the organization...


“It didn’t make sense to me at the time, as this group of rock stars should have been able to get anything done. In the end, there were a host of contributing issues, but one of the biggest was our own making. The long and short of it is, we didn’t play nice with others. We were the stereotypical IT geeks, and our standoffish behavior isolated us within the org. It’s not a fun place to be, tends to be self-replicating, didn’t help move the org forward, and ultimately was detrimental to us as individuals.”


Even in 2019, it doesn’t seem to be an unfamiliar story for IT practitioners. Today I’d like to take a few minutes exploring some thoughts on how we can fight the norms and evolve as an IT department to become an even more integral part of the business.


How to Improve IT Departments – Get Out of IT


Quick! Tell me, how does your organization make money? Seriously, ask yourself this question. Unless you work for a non-profit, this is the ultimate goal for your business: to make money. If the question stumped you at all, I’d like to ask you one more. If you don’t know the ultimate goal for your business, how can you truly understand your place within the business and how best to work towards its success? You can’t. No matter the size of the machine, you have to understand what the pieces within it do and how they engage with each other to move the machine forward. IT is just one piece within your business, and you need to go learn about the other pieces, their needs, and friction points.


The only real answer is to broaden your horizons and get out of IT.


When I say, “Get out of IT,” I’m being literal. Leave. Get out. Go talk to people. Depending on where you are, the mechanics of getting out of IT are going to take different forms. In smaller organizations, the HR department may be your best resource to figure out the key players to talk to, whereas larger organizations may have formal shadowing, apprenticeships, or even job rotation programs. If that’s all too involved for you, spending a little more time on the intranet reading up on other teams will still pay dividends in developing a better understanding of your stakeholders.


While You’re Out and About – How IT Pros Can Learn From Other Departments


Listen. Show empathy. Is it that simple? Yeah, it really is. The act of getting out of IT is not just about learning and gaining information, it’s also about building relationships. The most important takeaway from this exercise is building inter-department relationships.


One of the easiest and most effective ways to build relationships is to listen to the other party. I’m not talking about just waiting for a point to interject or to solve the problems in your head while they’re talking. Don’t practice selective or distracted listening, but be present, focus on the person, and try to hear what they’re saying. It’s not easy to do. After all, we live in a distracted society and many of us make our livelihoods by trying to solve problems as efficiently as possible. For me, I find active listening can help significantly with overcoming my inattentiveness.


Before moving on, I want to point out one specific word: empathy. I chose it specifically, in part, because of how Stephen Covey defines empathetic listening, “…it’s that you fully, deeply, understand that person, emotionally as well as intellectually.” This is not white belt level attentiveness we’re talking about; this is Buddha-like listening, and with practice and intentionality, you may find you’re able to reach this level of enlightenment. By doing so, you’ll inevitably forge bonds, and the relationship you build will be based on mutual understanding. With these bonds in place, you and your new cohorts will be more in sync and better able to row in the same direction.


The Importance of Asking “Why?”


Why are we here? Not quite as existential as that, but fundamental nonetheless: you should consider adding the word “Why?” to your workplace repertoire. “Why?” Well, let me tell you why. This simple three-letter word will help you peel back the layers of the onion. Asked in the right way, it can be a powerful means to demonstrate your empathy and leverage the newly strengthened relationships you’ve built. It’s a means to get deeper insight into the problem, pain, or situation at hand. With deeper insights, you can create more effective solutions.


Now a word of caution for you, burgeoning Buddha. “Why?” can also backfire on you. It can be a challenging word. Used in the wrong context, setting, or situation, it can come across as a challenge to the person being questioned. If the other person gets defensive and puts their shields up, it’s possible to lose traction. Tread carefully with this powerful little word, and make sure your newly improved relationships are strong enough to withstand it being taken the wrong way. Long story short, be intentional with your communications, and asking “Why?” can take you a long way towards becoming more effective in your organization.


Why Can’t We Be Friends? (With IT)


The idea for this post has been kicking around for a while, but the title only came to me recently when the Rage Against the Machine song “Know Your Enemy” unexpectedly started playing in my car. If you’re able to strip out the controversial elements, the song is a call to action, fighting against complacency and bucking the norms. At the end of this post, that’s my hope for you: by being cognizant of our place in the organization and actively working to build better relationships, you’ll walk away not raging against the machine, but rather humming “why can’t we be friends, why can’t we be friends…”

So much in IT today is focused on the enterprise. At times, smaller organizations get left out of big enterprise data center conversations. Enterprise tools are far too expensive and usually way more than what they need to operate. Why pay more for data center technologies beyond what your organization needs? Unfortunately for some SMBs, this happens, and the ROI on the equipment they purchase never really realizes its full potential. Traditional data center infrastructure hardware and software can be complicated for an SMB to operate alone, creating further costs for professional services for configuration and deployment. This was all very true until the advent of hyperconverged infrastructure (HCI). So to answer the question I posed above: yes, HCI is very beneficial and well suited for most SMBs. Here's why:


1. The SMB is small- to medium-sized – Large enterprise solutions don’t suit SMBs. Aside from the issue of over-provisioning and the sheer cost of running an enterprise data center solution, SMBs just don't need those solutions. If the need for growth arises, with HCI, an organization can easily scale out according to their workloads.


2. SMBs can’t afford complexity – A traditional data center infrastructure usually involves some separation of duties and silos. When there are so many moving parts with different management interfaces, it can become complex to manage it all without stepping on anyone’s toes. HCI offers an all-in-one solution—storage, compute, and networking contained in a single chassis. HCI avoids the need for an SMB to employ separate networking, virtualization, and storage teams.


3. Time to market is speedy – SMBs don’t need to take months to procure, configure, and deploy a large-scale solution. A larger corporation might tolerate a long deployment schedule; an SMB usually can’t. HCI helps them get to market quickly; it’s as close to a plug-and-play data center as you can get. In some cases, depending on the vendor chosen, deployment time can be down to minutes.


4. Agility, flexibility, all of the above – SMBs need to be more agile and don’t want to carry all the overhead required to run a full data center. Power, space, and cooling can be expensive when it comes to large enterprise systems. Space itself can be a very expensive commodity. Depending on the SMB’s needs, their HCI system can be trimmed down to a single rack or even a half rack. HCI is also agile in nature due to the ability to scale on demand. If workloads spike overnight, simply add another block or node to your existing HCI deployment to bring you the performance your workloads require.
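The scale-out arithmetic behind that last point is simple: add nodes until every resource dimension of the workload is covered. A sketch in Python, with hypothetical per-node capacities:

```python
import math

def nodes_needed(workload, per_node):
    """Nodes required so every resource dimension is covered."""
    return max(math.ceil(workload[k] / per_node[k]) for k in workload)

# Hypothetical per-node capacity for a small HCI block
node = {"vcpu": 48, "ram_gb": 384, "storage_tb": 20}

# The current workload fits in a few nodes...
current = nodes_needed({"vcpu": 120, "ram_gb": 900, "storage_tb": 45}, node)

# ...and an overnight spike is handled by adding nodes, not re-architecting
spike = nodes_needed({"vcpu": 180, "ram_gb": 1400, "storage_tb": 60}, node)

print(current, spike)
```

The design point is that growth is a purchasing decision, one more node, rather than a redesign of storage, compute, and networking as separate silos.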


5. Don’t rely on the big players – Big-name storage and compute licensing can come at a significant cost. Some HCI vendors offer a proprietary, built-in hypervisor included in the price, which can be easier to manage than an enterprise license agreement. Management software is also built in to many HCI vendors’ solutions.


HCI has given the SMB more choices when it comes to building out a data center. In the past, an SMB had to purchase licensing and hardware generally built for the large enterprise. Now they can purchase a less expensive solution with HCI. HCI offers agility, quick time to market, cost savings, and reduced complexity. These can all be pain points for an SMB, which can be solved by implementing an HCI solution. If you work for an SMB, have you found this to be true? Does HCI solve many of these problems?

This is the second of a series of five posts on the market space for hyperconverged infrastructure (HCI).


To be clear, as the previous blog post outlined, there are many options in this space. The decision shouldn’t be made lightly: a clear understanding of your needs, of what you hope to accomplish, and of why you want a given solution should guide your decision-making process.


Here’s a current listing of industry players in hyperconverged.

  • Stratoscale
  • Pivot3
  • DellEMC Vx series
  • NetApp
  • Huawei
  • VMware
  • Nutanix
  • HPE SimpliVity
  • HyperGrid
  • Hitachi Data Systems (Vantara)
  • Cisco
  • Datrium


There are more, but these are the biggest names today.


Each of these vendors places toward the top of the Gartner HCI Magic Quadrant (MQ). The real question is: which is the right one for you?


Questions to Ask When Choosing Hyperconvergence Vendors


Organizations should ask lots of questions to determine what vendor(s) to pursue. Those questions shouldn’t be based on the placement in the Gartner MQ, but rather your organization’s direction, requirements, and what’s already in use.


You also shouldn’t ignore the knowledge base of your technical staff. For example, I wouldn’t want to put a KVM-only hypervisor in the hands of a historically VMware-only staff without understanding the learning curve and potential for mistakes. Are you planning on using virtual machines or containers? Each has its own considerations. What about a cloud element? While most architectures support cloud, you should ask which cloud platform and which applications you’ll be using.


One of the biggest variables to be considered is, and always should be, backup, recovery, and DR. Do you have a plan in place? Will your existing environment support this vendor’s approach? Do you believe you’ve evaluated the full spectrum of how this will be done? The elements that set one platform apart are how the storage in the environment handles tasks like replication, deduplication, redundancy, fault tolerance, encryption, and compression. In my mind, how each of these is handled, and how the platform might integrate into your existing environment, must be considered.
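To make one of those differentiators concrete: block-level deduplication stores each unique block once, which is why fleets of VMs cloned from a common template dedupe so well. A toy fixed-block illustration in Python (the block size and dataset are illustrative; real arrays use far more sophisticated schemes):

```python
import hashlib

def dedup_ratio(data, block_size=4096):
    """Ratio of logical size to unique-block size under fixed-block dedup."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return len(blocks) / len(unique)

# Hypothetical dataset: 50 VMs cloned from the same 16 KiB "golden image"
template = b"".join(bytes([i]) * 4096 for i in range(4))  # 4 distinct blocks
vms = template * 50
print(f"{dedup_ratio(vms):.0f}x")
```

When comparing vendors, the useful question isn’t the headline ratio but how each platform achieves it for your actual data, and what that implies for replication and backup traffic.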


I’d also be concerned about how the security regulations your organization faces are considered in the architecture of your choice. Will that affect the vendor you choose? It can, but it may not even be relevant.


I would also be concerned about the company’s track record. We assume Cisco, NetApp, or HPE will be around, as they’ve been there with support and solutions for decades. To be fair, longevity isn’t the only method for corporate evaluation, but it’s a very reasonable concern when it comes to supporting the environment, future technology breakthroughs, enhancements, and maybe the next purchase, should it be appropriate.


Now, my goal here isn’t to make recommendations, but to urge readers to narrow down a daunting list and then evaluate the features and functions most relevant to their organization. Should a true evaluation be undertaken, my recommendation would be to research the depth of your company’s needs and which of them can be addressed by placing one of these devices, or a series of them, in your environment.


The decision can last years, change the direction of how your virtual servers exist in your environment, and shouldn’t be undertaken lightly. That said, hyperconverged infrastructure has been one of the biggest shifts in the market over the last few years.

Getting ready for VMworld next week in San Francisco. If you're attending, please stop by the booth and say hello. I have some speaking sessions as well as a session in the expo hall. Feel free to come over and talk data or bacon.


As always, here are some links I hope you find interesting. Enjoy!


Supercomputer creates millions of virtual universes

Another example of where quantum computers will help advance research beyond what supercomputers of today can provide.


Amazon's facial recognition mistakenly labels 26 California lawmakers as criminals

This depends on what your definition of "mistake" is.


Younger Americans better at telling factual news statements from opinions

I'd like to see this survey repeated, but with a narrower focus on age groups. I believe grouping 18-49 as "young" is a bit of a stretch.


Attorney General Barr and Encryption

Good summary of the talking points in the debate about backdoors and encryption.


Loot boxes a matter of "life or death," says researcher

As a parent I have seen firsthand how loot boxes affect children and their habits.


Black Hat: GDPR privacy law exploited to reveal personal data

I wish I had attended this talk at Black Hat; it's brilliant research into how data privacy laws may be making us less safe than we thought.


He tried to prank the DMV. Then his vanity license plate backfired big time.

NULLs remain the worst mistake in computer science.


I have walked past, but never into, the Boston Public Library many times. Last week I took the time to go inside and was not disappointed.


Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting article by my colleague Jim Hansen about the always interesting topic of cybersecurity. He says it comes down to people and visibility. Here’s part one of his article.


The Trump administration issued two significant reports in the last couple of months attesting to the state of the federal government’s cybersecurity posture. The Federal Cybersecurity Risk Determination Report and Action Plan noted 74% of agencies that participated in the Office of Management and Budget’s and Department of Homeland Security’s risk assessment process have either “at risk” or “high risk” cybersecurity programs. Meanwhile, the National Cyber Strategy of the United States of America addressed steps agencies should take to improve upon the assessment.


Together, the reports illustrate two fundamental factors instrumental in combating those who would perpetrate cybercrimes against the U.S. Those factors—people and the technology they use—comprise our government’s best defense.


People: the First Line of Defense


People develop the policies and processes driving cybersecurity initiatives throughout the government. Their knowledge—about the threat landscape, the cybersecurity tools available for government, and the security needs and workings of their own organizations—is essential to running a well-oiled security apparatus.


But finding those skilled individuals, and keeping them, is difficult. Since the government is committed to keeping taxpayers’ costs low, agencies can’t always afford to match the pay scales of private sector companies. This leaves agencies at a disadvantage when attempting to attract and retain skilled cybersecurity talent to help defend and protect national security interests.


Several education initiatives are underway to help with this cyberskills shortage. The National Cyber Strategy report lays out some solid ideas for workforce knowledge improvement, including leveraging merit-based immigration reforms to attract international talent, reskilling people from other industries, and more. Meanwhile, the Federal Cyber Reskilling Academy provides hands-on training to prepare non-IT professionals to work as cyberdefense analysts.


Hiring processes must also continue to evolve. Although there has been progress within the DoD, many agencies still adhere to an approach dictated by stringent criteria, including years of experience, college degrees, and other factors. This effectively puts workers into boxes—this person goes in a GS-7 pay grade box, and this other person in a GS-15.


While education and experience are both important, so are ideas, creativity, problem-solving, and a willingness to think outside the box. It’s a shame those attributes can’t be considered just as valuable, especially in a world where security professionals are continually being asked to think on their feet and combat an enemy who both shows no mercy and evolves quickly to bypass an organization’s defenses. The government needs people who can effectively identify and understand a security event, react quickly in the case of an event, respond to the event, anticipate the next potential attack, and formulate the right policies to prevent future incidents.


(to be continued next week)


Find the full article on Fifth Domain.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

In the previous blog, we discussed how defining use cases mapped to important security and business-related objectives is the first step in building and maintaining a secure environment. We’ve all heard the phrase, “you can’t defend what you can’t see,” but you also “can’t defend what you don’t understand.”


Use cases will be built with the components of the ecosystem, so it’s critical to identify them early. Overlooking a key component could prove to be costly later. The blueprint for use case deployment can be drawn up based on three key areas.


1. Identify Role Players, Such as Endpoints, Infrastructure, and Users

A network is built using devices such as routers, switches, firewalls, and servers. Their optimal configuration and deployment require a detailed understanding of the role each device will play in connecting users with applications and services securely and efficiently, as mandated by their individual and group roles.

User and endpoint roles can then be mapped to enforcement techniques, authentication and access methods, and security audit requirements during the deployment phase.


For example:

  • Campus-based employees may access the network via wired company-owned devices and be authorized for network access by MAB (MAC Authentication Bypass)
  • Mobile employees require 802.1X authentication via wireless network access
  • Guest users with their own wireless devices use Web Authentication and are authorized to access a restricted set of resources
  • Branch office connectivity and other remote access users connect via an IPsec VPN with authentication via IKEv2 with RSA signatures or EAP
  • Network administrator groups require access to subsets of devices, authenticate per device, and are authorized for specific commands
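As a sketch, role-to-access mappings like the ones above can be captured in a simple policy table before deployment. The role names, method labels, and restrictions below are illustrative assumptions, not any particular product’s schema:

```python
# Illustrative mapping of user/endpoint roles to access and authentication
# methods (all names here are hypothetical examples, not a vendor schema).
ACCESS_POLICY = {
    "campus_employee": {"access": "wired",     "auth": "MAB"},
    "mobile_employee": {"access": "wireless",  "auth": "802.1X"},
    "guest":           {"access": "wireless",  "auth": "WebAuth",
                        "restrictions": ["guest_resources_only"]},
    "branch_office":   {"access": "ipsec_vpn", "auth": "IKEv2-RSA"},
    "network_admin":   {"access": "mgmt",      "auth": "per_device",
                        "restrictions": ["command_authorization"]},
}

def auth_method(role: str) -> str:
    """Return the authentication method required for a given role."""
    try:
        return ACCESS_POLICY[role]["auth"]
    except KeyError:
        raise ValueError(f"Unknown role: {role}")

print(auth_method("guest"))  # WebAuth
```

Writing the mapping down this way, even informally, makes gaps obvious early — an endpoint type with no row in the table is exactly the kind of overlooked component that proves costly later.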


2. Understand Data Flows and Paths

The role players in the ecosystem are connected by data flows. Where do these flows need to go within the network? Where are users coming from? Flows need to be defined. This includes flows between users and services, as well as administrative flows between network infrastructure components (think routing protocols, AAA requirements, log consolidation, etc.). As a data flow traverses a known path, identify network transit points such as remote access, the perimeter, and even on- or off-premises communications with cloud-based services.


Understanding how data moves through the ecosystem raises questions on how to secure it.


  • Should a user be granted the same level of access regardless of their point of access?
  • Should data segmentation be implemented, and if so, will physical and/or logical segmentation be used?
  • Should all services be located inside the firewall perimeter, on a DMZ, or be cloud hosted?
  • What types and levels of authentication, integrity, and privacy will be required to secure data flows?
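Questions like these can be answered flow by flow. As a minimal sketch, the controls required for a flow can be derived from its transit points and sensitivity; the zone names and control labels below are hypothetical:

```python
# Hypothetical sketch: derive required security controls for a data flow
# from its source/destination zones and data sensitivity. Zone and
# control names are illustrative, not from any standard.
def required_controls(src_zone: str, dst_zone: str, sensitive: bool) -> set:
    controls = set()
    if src_zone != dst_zone:
        # Flow crosses a transit point between zones.
        controls.add("firewall_inspection")
    if "internet" in (src_zone, dst_zone) or dst_zone == "cloud":
        # Flow leaves the premises.
        controls.add("encryption_in_transit")
    if sensitive:
        controls.update({"encryption_in_transit", "logging"})
    return controls

print(sorted(required_controls("campus", "cloud", sensitive=True)))
```

The point isn’t the specific rules, which any real policy would refine, but that each flow’s path through the ecosystem should yield an explicit, reviewable answer to the questions above.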


3. Identify Software and Hardware, SaaS, and Cloud

Effective configuration and deployment of network elements is dictated by required functions and permitted traffic flows, which in turn drive the choice of hardware and software. Device capabilities shouldn’t define a security policy, although they may enhance it. Choosing products that don’t meet security or business needs is a sure way to limit effectiveness.


Knowing what you need is critical. However, one major influence on security policy is business return on investment. When possible, consider migration strategies using existing infrastructure to support newer features and a more secure design. Relocating hardware to different areas of the network, or simply upgrading a device by adding memory to accommodate new software versions, should always be considered. Deprecating on-premises hardware may make sense if a transition to cloud-based services is a more efficient and cost-effective way of meeting security and business objectives.


When selecting new hardware, plan for future growth in terms of device capacity (bandwidth), performance (processor, memory), load balancing/redundancy capabilities, and flexibility (static form-factor versus expansion slots for additional modules). Set realistic and well-researched performance goals to ensure stability and predictability and choose the best way to implement them.
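Planning for future growth lends itself to a quick back-of-the-envelope calculation. The growth rate, starting utilization, and hardware lifecycle below are made-up numbers purely for illustration:

```python
# Back-of-the-envelope capacity projection (illustrative numbers only).
def projected_bandwidth(current_gbps: float, annual_growth: float,
                        years: int) -> float:
    """Compound current utilization forward to size a new device."""
    return current_gbps * (1 + annual_growth) ** years

# E.g., 4 Gbps of traffic today, 30% yearly growth, 5-year lifecycle:
need = projected_bandwidth(4.0, 0.30, 5)
print(f"{need:.1f} Gbps")  # 14.9 Gbps
```

A device sized only for today’s 4 Gbps would be saturated well before the end of its lifecycle, which is why capacity, performance, and expansion slots deserve well-researched targets rather than guesses.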


When selecting software, in addition to providing the required functionality, the following points should be considered.

  • Evaluate standards-based versus vendor proprietary features.
  • If certified products are required, is the vendor involved with certification efforts and committed to keeping certifications up to date?
  • Does the software provide for system hardening and performance optimization (such as control-plane policing and system tuning parameters) and system/feature failover options?
  • Understand performance trade-offs when enabling several features applied to the same traffic flows. Multiple devices may be required to provide all feature requirements.
  • Is the vendor committed to secure coding practices and responsive to addressing vulnerabilities?


After building an ecosystem blueprint, how can we be sure its deployment supports appropriate design and security principles? The next blog will look at the role of best practices, industry guidelines, and compliance requirements.

Anyone who’s hired for a technical team can understand this scenario.


You’re hiring for “X” positions, so of course you receive “X” times infinity applications. Just because you’re looking for the next whizbang engineer, it doesn’t mean you get to neglect your day job. There are still meetings to attend and emails to write, never mind looking after the team and the thing you’re hiring for anyway! So, what do you do? Most people want to be as efficient as possible, so they jump right to the experience and skills section of the resume. That’s what you’re hiring for, right? Technical teams require technical skills. All the rest is fluff.


If you’ve done this, and I bet you have if you listen to the little voice in the back of your head, then you may be doing a disservice to your team, the candidate, and ultimately yourself. What the heck am I talking about? Yup, the “soft skills,” and today I’d like to talk primarily about communication skills.


I can hear the eyerolls from here, so let’s spend a few minutes talking about why they’re so important.


I’m a <insert technologist label here>. Why do I care about communications?

In 2019, continuous deployment pushes code around the clock, security event managers stop bad guys before we realize they’re there, and even help desk tickets can manage themselves. Yet even with these advances in technology and automation, people want relationships. If you think about it, when you really need to get anything complex done, do you ask your digital personal assistant, or do you work with another human? We’re out for human interaction. By building relationships, we can move IT further away from the reputation of “the Department of No” and toward frameworks and cultures that enable things like DevOps and becoming a true business partner. Our ability to communicate builds bridges and becomes the foundation for our relationships.


Still skeptical? Let’s walk through another scenario.

Assume you’re an architect in a large organization and you’ve got an idea to revolutionize your business. You manage to get time with the person who controls the purse strings and you launch right into what this widget is and what you need for resources. You wow them with your technical knowledge and assure them this widget is necessary for the organization to succeed. Is this a recipe for success? Probably not. You might even get bounced out of the office on your ear.


Let’s replay the same scenario a little bit differently. You get time on the decisionmaker's calendar; but you do a little homework first. You ask your colleagues about the decisionmaker and what type of leader they are. You dig into what their organizational goals are and how they might be measured against said goals. Armed with this information, you frame the conversation in terms of the benefits delivered both to the organization and the purse holder. And you can speak their language, which they’ll most likely appreciate and will make your conversation go that much smoother. Due to your excellent communication skills, your project is approved, you have a new BFF, and you both go get tacos to celebrate your impending world domination.


Neither the world domination nor the tacos would be possible without the ability to convey benefits to the recipient in a language they understand. The only difference between world domination and coming across like a self-righteous nerd who cares more about their knobs than the organization is the ability to clearly and succinctly communicate with the business in a language they understand.


So now that we’ve talked a bit about why...

Let’s circle back to the original premise for a moment: you should be building communication skills into your teams. Obviously if you’re hiring a technical writer, communication is the skill, but chances are you’re looking for someone with attention to detail who can write some form of prose. The ability to craft a narrative will be vital if you’re looking for a technical marketing person. Anyone in a help desk role needs to build rapport, so communicating with empathy and understanding becomes vital. If you’re hiring for an upper-level staff position, the ability to distill highly technical concepts down to fundamentals and convey them in language that makes sense to the recipient is paramount. In my experience, this last example can be a bit of a rarity; if you find someone either within or outside your ranks who exudes it, you should think about how you can keep them on your hook.


How do you achieve this unicorn dream of hiring for communication skills? Classic geek answer, “it depends.” We can’t possibly diagnose all the permutations in my wee little blog post. Rather than try to give you a recipe, I think you’ll find by shifting your approach slightly, to be more mindful of what you’d like to achieve via your communications, you’ll inherently be more successful.


One last point before I bid you adieu. Here, we’ve focused on why you need to hire for these skills. This isn’t to say for one second that you shouldn’t also build them within your existing organizations. This, however, requires looking at the topic from some different angles and a whole other set of techniques, so we’ll leave it for another day. Until then, I hope you found this communication helpful, and I’d love to turn it into a dialogue if you’re willing to participate in the comments below.

In the over 20 years I’ve spent in the IT industry, I’ve seen many changes, but nothing has had a bigger impact than virtualization.


I remember sitting in a classroom back in the early 2000s with a server vendor introducing VMware. They shared how it enabled us to segment underused resources on Intel servers into something called a "virtual machine" and then run separate environments in parallel on the same piece of hardware—technology witchcraft.


This led to a seismic shift in the way we built our server infrastructure. At the time, our data centers and computer rooms were full of individual, often over-specified and underutilized servers, each running an operating system and application, all of which consumed space, power, and cooling, and cost a small fortune. Add in deployments taking months, and projects were often slow, reducing innovation and elongating the response to business demands.


Virtualization revolutionized this, allowing us to reduce server estates and lower the cost of data centers and infrastructure. This allowed us to deploy new applications and services more quickly, letting us better meet the needs of our enterprise.


Today, virtualization is the de facto standard for how we deploy server infrastructure. It’s hard to believe it was once an odd, cutting-edge concept. While it’s the standard deployment model, the reasons we virtualize have changed over the last 20 years. It’s no longer about resource consolidation—it’s more about simplicity, efficiency, and convenience.


But, virtualization (especially server-based) is also a mature technology designed for Intel servers, running Windows and Linux inside the sacred walls of our data center. While it has served us well in making our data centers more efficient and flexible, reducing cost and ecological impact, does it still have a part to play in a world rapidly moving away from these traditional ways of working?


Over the next few weeks, we’ll explore where virtualization is today, the problems it has created, where it’s heading, and whether it remains relevant in our rapidly changing technology world.


In this series, we’ll discuss:

  • The Problems of Virtualization – Where we are today and the problems 20 years of virtualization have caused, such as management, control, and VM sprawl.

  • Looking Beyond Server Virtualization – When we use the phrase virtualization, our thoughts immediately turn to Intel servers, hypervisors, and virtual machines. But the future power of virtualization lies in changing the definition. If we think of it as abstracting software’s dependency on specific hardware, it opens a range of new opportunities.

  • Virtualization and the Drive to Infrastructure as Code – How the shift to a more software-defined world is going to cement the need for virtualization. Environments reliant on engineered systems are inflexible and slow to deploy. They’re going to become less useful and less prevalent. We need to be able to deploy our architecture in new ways and deliver it rapidly, consistently, and securely.

  • The Virtual Future – As we seek increasing agility, flexibility, and portability across our infrastructure and need more software-defined environments, automation, and integration with public cloud, virtualization (although maybe not in the traditional way we think of it) is going to play a core part in our futures. The more our infrastructure is software, the more ability we’ll have to deliver the future so many enterprises demand.


I hope you’ll join me in this series as I look at the changing face of virtualization and the part it’s going to play in our technology platforms today and in the future.


Had a wonderful time at Black Hat last week. Next up for me is VMworld in two weeks. If you're reading this and attending VMworld, stop by the booth and say hello.


As always, here are some links I hope you find interesting. Enjoy!


Hospital checklists are meant to save lives — so why do they often fail?

Good article for those of us who rely on checklists, and how to use them properly.


A Framework for Moderation

Brilliant article to help make sense of why content moderation is not as easy as we might think.


With warshipping, hackers ship their exploits directly to their target’s mail room

If you don't have the ability to detect rogue devices joining your network, you're at risk for this attack vector.


Uber, losing billions, freezes engineering hires

That's a lot of money disappearing. Makes me wonder where it's going, because it's not going to the drivers.


Study: Electric scooters aren’t as good for the environment as you think

Oh, maybe Uber is paying millions for research articles to be published. Just kidding. Uber offers scooters as well, as they remain dedicated to making things worse for everyone.


Robot, heal thyself: scientists develop self-repairing machines

What's the worst that can happen?


The World’s Largest and Most Notable Energy Sources

I enjoyed exploring this data set, and I think you might as well. For example, Bitcoin's current energy consumption is about 60,000 megawatt-hours, almost the same daily amount as the entire city of London.


We brought custom black hats to Black Hat, of course. We also brought photobombs by Dez, apparently.


Omar Rafik, SolarWinds, Senior Manager, Federal Sales Engineering


Here’s an interesting article by my colleague Mav Turner about machine learning and artificial intelligence. These technologies have recently come into vogue, and Mav does a great job of exploring them and their impacts on federal networking.


Machine learning (ML) and artificial intelligence (AI) could very well be the next major technological advancements to change the way federal IT pros work. These technologies can provide substantial benefits to any IT shop, particularly when it comes to security, network, and application performance.


Yet, while the federal government has been funding research into AI for years, most initiatives are still in the early stages. The reason for this is preparation. It’s critically important to prepare for AI and machine learning technologies—adopting them in a planned, purposeful manner—rather than simply applying them haphazardly to current challenges and hoping for the best.


Definition and Benefits of AI


From a high-level perspective, machine learning-based technologies create and enhance algorithms that identify patterns in large sets of data. Artificial intelligence is the ability for machines to continuously learn and apply cross-domain information to make decisions and act.


For example, AI technology allows computers to automatically recognize a threat to an agency’s infrastructure, automatically respond, and automatically thwart the attack without the assistance of the IT team.
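As a toy illustration of the underlying pattern-recognition idea (not how any real security product works), a system can learn a baseline from historical data and flag values that deviate sharply from it. The data and threshold below are made up:

```python
import statistics

# Toy illustration: learn a baseline from historical data and flag
# outliers. Real ML-based threat detection is far more sophisticated;
# this only shows the "learn from data, then decide" pattern.
def flag_anomalies(history, current, z_threshold=3.0):
    """Return values in `current` more than z_threshold std devs from
    the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [v for v in current if abs(v - mean) > z_threshold * stdev]

# Hypothetical per-minute connection counts to a server:
baseline = [100, 105, 98, 102, 99, 101, 103, 97]
print(flag_anomalies(baseline, [104, 250]))  # [250]
```

The spike to 250 connections stands out against the learned baseline and could trigger an automated response, while 104 falls within normal variation, which is the kind of decision the article describes machines making without the IT team's assistance.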


But, remember, planning and preparation are absolutely necessary before agencies dive into AI implementation.


Preparing for AI


From a technology perspective, it’s important to be sure agency data is ready for the shift. Remember, machine learning technologies learn from existing data. Most agencies have data centers full of information—some clean, some not. To prepare, the absolute first step is to clean up the agency’s data store to ensure bad data doesn’t lead to bad automatic decisions.
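A minimal sketch of the kind of cleanup step described above might drop records with missing or malformed fields before they feed a learning pipeline. The record fields here are hypothetical:

```python
# Minimal data-cleanup sketch: discard records with missing or malformed
# fields before they feed a learning pipeline (fields are hypothetical).
def clean_records(records):
    required = {"timestamp", "src_ip", "bytes"}
    cleaned = []
    for rec in records:
        if not required.issubset(rec):
            continue  # drop incomplete records
        if not isinstance(rec["bytes"], int) or rec["bytes"] < 0:
            continue  # drop malformed byte counts
        cleaned.append(rec)
    return cleaned

raw = [
    {"timestamp": "2019-08-01T00:00Z", "src_ip": "10.0.0.1", "bytes": 512},
    {"timestamp": "2019-08-01T00:01Z", "src_ip": "10.0.0.2"},              # missing field
    {"timestamp": "2019-08-01T00:02Z", "src_ip": "10.0.0.3", "bytes": -1},  # malformed
]
print(len(clean_records(raw)))  # 1
```

Only one of the three records survives; the other two are exactly the kind of bad data that, left in place, would lead a model toward bad automatic decisions.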


Next, prepare the network by implementing network automation. Network automation will allow federal IT pros to provision a large number of network elements, for example, or automatically enhance government network performance. A good network automation package will provide insights—and automated response options—for fault, availability, performance, bandwidth, configuration, and IP address management.


Finally, strongly consider integrating any information that isn’t already integrated. For example, integrating application performance data with network automation software can automatically enhance performance. In fact, this integration can go one step further. Integrating historical data will allow the system to predict an impending spike in demand and automatically increase bandwidth levels or enable the necessary computing elements to accommodate the spike. While network automation is powerful and can start your organization down the path to prepare for AI-enabled solutions, there’s a big difference between network automation and AI.
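To make the prediction idea concrete, even a naive forecast over historical demand can anticipate a spike early enough to provision for it. The numbers and scale-up threshold below are illustrative assumptions, far simpler than anything a real system would use:

```python
# Illustrative sketch: use historical demand to anticipate a spike and
# provision ahead of it (data and threshold are made up).
def forecast_next(history, window=3):
    """Naive moving-average forecast of the next interval's demand."""
    recent = history[-window:]
    return sum(recent) / len(recent)

demand = [40, 42, 45, 60, 75, 90]  # e.g., Mbps over recent intervals
predicted = forecast_next(demand)
if predicted > 70:
    print("scale up bandwidth ahead of predicted demand")
```

Here the trend in recent intervals pushes the forecast past the threshold, so capacity is increased before the spike arrives rather than after users feel it.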




This type of preparation provides two of the most essential elements of a successful AI implementation: efficiency and visibility. If an agency’s network isn’t already as efficient as it can be—and if the different elements of the infrastructure are not already linked—advanced technologies won’t be anywhere near as effective as they could be if the pieces were already in place, ready to take the agency to the next level.


Find the full article on our partner DLT’s blog Technically Speaking.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.
