
Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Jim Hansen about the always interesting topic of cybersecurity. He says it comes down to people and visibility. Here’s part one of his article.

 

The Trump administration issued two significant reports in the last couple of months attesting to the state of the federal government’s cybersecurity posture. The Federal Cybersecurity Risk Determination Report and Action Plan noted 74% of agencies that participated in the Office of Management and Budget’s and Department of Homeland Security’s risk assessment process have either “at risk” or “high risk” cybersecurity programs. Meanwhile, the National Cyber Strategy of the United States of America addressed steps agencies should take to improve upon the assessment.

 

Together, the reports illustrate two fundamental factors instrumental in combating those who would perpetrate cybercrimes against the U.S. Those factors—people and the technology they use—comprise our government’s best defense.

 

People: the First Line of Defense

 

People develop the policies and processes driving cybersecurity initiatives throughout the government. Their knowledge—about the threat landscape, the cybersecurity tools available for government, and the security needs and workings of their own organizations—is essential to running a well-oiled security apparatus.

 

But finding those skilled individuals, and keeping them, is difficult. Since the government is committed to keeping taxpayers’ costs low, agencies can’t always afford to match the pay scales of private sector companies. This leaves agencies at a disadvantage when attempting to attract and retain skilled cybersecurity talent to help defend and protect national security interests.

 

Several education initiatives are underway to help with this cyberskills shortage. The National Cyber Strategy report lays out some solid ideas for workforce knowledge improvement, including leveraging merit-based immigration reforms to attract international talent, reskilling people from other industries, and more. Meanwhile, the Federal Cyber Reskilling Academy provides hands-on training to prepare non-IT professionals to work as cyberdefense analysts.

 

Hiring processes must also continue to evolve. Although there has been progress within the DoD, many agencies still adhere to an approach dictated by stringent criteria, including years of experience, college degrees, and other factors. This effectively puts workers into boxes—this person goes in a GS-7 pay grade box, and this other person in a GS-15.

 

While education and experience are both important, so are ideas, creativity, problem-solving, and a willingness to think outside the box. It’s a shame those attributes can’t be considered just as valuable, especially in a world where security professionals are continually being asked to think on their feet and combat an enemy who both shows no mercy and evolves quickly to bypass an organization’s defenses. The government needs people who can effectively identify and understand a security event, react quickly in the case of an event, respond to the event, anticipate the next potential attack, and formulate the right policies to prevent future incidents.

 

(to be continued next week)

 

Find the full article on Fifth Domain.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

In the previous blog, we discussed how defining use cases mapped to important security and business-related objectives is the first step in building and maintaining a secure environment. We’ve all heard the phrase, “you can’t defend what you can’t see,” but you also “can’t defend what you don’t understand.”

 

Use cases will be built with the components of the ecosystem, so it’s critical to identify them early. Overlooking a key component could prove to be costly later. The blueprint for use case deployment can be drawn up based on three key areas.

 

1. Identify Role Players, Such as Endpoints, Infrastructure, and Users

A network is built using devices such as routers, switches, firewalls, and servers. Their optimal configuration and deployment require a detailed understanding of the role each device will play in connecting users with applications and services securely and efficiently, as mandated by their individual and group roles.

User and endpoint roles can then be mapped to enforcement techniques, authentication and access methods, and security audit requirements during the deployment phase.

 

For example:

  • Campus-based employees may access the network via wired company-owned devices and are authorized for network access via MAB (MAC Authentication Bypass)
  • Mobile employees require 802.1X authentication via wireless network access
  • Guest users with their own wireless devices use Web Authentication and are authorized to access a restricted set of resources
  • Branch office connectivity and other remote access users connect via an IPsec VPN with authentication via IKEv2 with RSA signatures or EAP
  • Network administrator groups require access to subsets of devices, authenticate per device, and are authorized for specific commands
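
To make this mapping concrete, here’s a minimal Python sketch of how roles like the ones above might be recorded ahead of deployment, so each role carries its authentication method and access policy with it. The role names, methods, and policy fields are illustrative assumptions, not an actual vendor configuration.

```python
# Minimal sketch: mapping user/endpoint roles to authentication methods
# and access policies. Role names, methods, and policy fields are
# illustrative placeholders, not a real vendor configuration.

from dataclasses import dataclass

@dataclass
class AccessPolicy:
    auth_method: str       # e.g., "MAB", "802.1X", "WebAuth", "IKEv2-RSA"
    network_access: str    # where the role lands after authentication
    audit_level: str       # how much logging this role requires

ROLE_POLICIES = {
    "campus_wired_employee": AccessPolicy("MAB", "corporate_vlan", "standard"),
    "mobile_employee":       AccessPolicy("802.1X", "corporate_wlan", "standard"),
    "guest_wireless":        AccessPolicy("WebAuth", "guest_vlan_restricted", "minimal"),
    "branch_remote_user":    AccessPolicy("IKEv2-RSA or EAP over IPsec", "vpn_pool", "standard"),
    "network_admin":         AccessPolicy("per-device AAA", "mgmt_subnet", "command-level"),
}

def policy_for(role: str) -> AccessPolicy:
    """Look up the access policy for a role, failing loudly on unknown roles."""
    try:
        return ROLE_POLICIES[role]
    except KeyError:
        raise ValueError(f"No policy defined for role '{role}' - add it before deployment")

if __name__ == "__main__":
    for role, policy in ROLE_POLICIES.items():
        print(f"{role:24} -> {policy.auth_method:28} {policy.network_access}")
```
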

 

2. Understand Data Flows and Paths

The role players in the ecosystems are connected by data flows. Where do these flows need to go within the network? Where are users coming from? Flows need to be defined. This includes flows between users and services, as well as administrative flows between network infrastructure components (think routing protocols, AAA requirements, log consolidation, etc.). As a data flow traverses a known path, identify network transit points such as remote access, perimeter, and even on- or off-premises communications with cloud-based services.

 

Understanding how data moves through the ecosystem raises questions on how to secure it.

 

  • Should a user be granted the same level of access regardless of their point of access?
  • Should data segmentation be implemented, and if so, will physical and/or logical segmentation be used?
  • Should all services be located inside my firewall perimeter, on a DMZ, or cloud hosted?
  • What types and levels of authentication, integrity, and privacy will be required to secure data flows?
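
One lightweight way to work through these questions is to model each flow and the transit points it crosses, then check the flow against the controls required at each point. The Python sketch below does exactly that; the zones, transit points, and required controls are hypothetical examples.

```python
# Minimal sketch: representing data flows and the transit points they cross,
# so each flow can be checked against required security controls.
# Flow names, zones, and controls are hypothetical examples.

from dataclasses import dataclass, field

@dataclass
class Flow:
    name: str
    source_zone: str
    dest_zone: str
    transit_points: list                          # e.g., ["campus_edge", "perimeter_fw"]
    controls: set = field(default_factory=set)    # e.g., {"mutual_auth", "encryption"}

REQUIRED_AT = {
    "perimeter_fw":  {"encryption"},
    "cloud_gateway": {"encryption", "mutual_auth"},
}

def missing_controls(flow: Flow) -> dict:
    """Return, per transit point, any controls the flow is missing."""
    gaps = {}
    for point in flow.transit_points:
        required = REQUIRED_AT.get(point, set())
        gap = required - flow.controls
        if gap:
            gaps[point] = gap
    return gaps

if __name__ == "__main__":
    payroll = Flow("payroll_sync", "campus", "saas_provider",
                   ["campus_edge", "perimeter_fw", "cloud_gateway"],
                   {"encryption"})
    print(missing_controls(payroll))   # -> {'cloud_gateway': {'mutual_auth'}}
```
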

 

3. Identify Software and Hardware, SaaS, and Cloud

Effective configuration and deployment of network elements is dictated by required functions and permitted traffic flows, which in turn drive the choice of hardware and software. Device capabilities shouldn’t define a security policy, although they may enhance it. Choosing products that don’t meet security or business needs is a sure way to limit effectiveness.

 

Knowing what you need is critical. However, one major influence on security policy is business return on investment. When possible, consider migration strategies using existing infrastructure to support newer features and a more secure design. Relocating hardware to different areas of the network, or simply upgrading a device (for example, adding memory to accommodate new software versions), should always be considered. Deprecating on-premises hardware may be considered if a transition to cloud-based services is seen as a more efficient and cost-effective method of meeting security and business objectives.

 

When selecting new hardware, plan for future growth in terms of device capacity (bandwidth), performance (processor, memory), load balancing/redundancy capabilities, and flexibility (static form-factor versus expansion slots for additional modules). Set realistic and well-researched performance goals to ensure stability and predictability and choose the best way to implement them.

 

When selecting software, in addition to the required functionality, the following points should be considered.

  • Evaluate standards-based versus vendor proprietary features.
  • If certified products are required, is the vendor involved with certification efforts and committed to keeping certifications up to date?
  • Does the software provide for system hardening and performance optimization (such as control-plane policing and system tuning parameters) and system/feature failover options?
  • Understand performance trade-offs when enabling several features applied to the same traffic flows. Multiple devices may be required to provide all feature requirements.
  • Is the vendor committed to secure coding practices and responsive to addressing vulnerabilities?
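
A simple way to keep such an evaluation honest is to turn the criteria into a weighted scorecard. The following Python sketch shows the idea; the criteria, weights, and candidate scores are placeholders you’d replace with your own.

```python
# Minimal sketch: scoring candidate products against selection criteria like
# those above. Criteria, weights, and candidate scores are made up for
# illustration; real evaluations will define their own.

CRITERIA_WEIGHTS = {
    "standards_based":              3,
    "certifications":               2,
    "hardening_features":           3,
    "performance_headroom":         2,
    "vendor_security_track_record": 3,
}

def weighted_score(candidate_scores: dict) -> int:
    """Sum of (criterion score 0-5) * weight, over the criteria we care about."""
    return sum(CRITERIA_WEIGHTS[c] * candidate_scores.get(c, 0)
               for c in CRITERIA_WEIGHTS)

if __name__ == "__main__":
    candidates = {
        "product_a": {"standards_based": 5, "certifications": 4,
                      "hardening_features": 3, "performance_headroom": 4,
                      "vendor_security_track_record": 4},
        "product_b": {"standards_based": 3, "certifications": 5,
                      "hardening_features": 4, "performance_headroom": 2,
                      "vendor_security_track_record": 5},
    }
    for name, scores in sorted(candidates.items(),
                               key=lambda kv: weighted_score(kv[1]), reverse=True):
        print(f"{name}: {weighted_score(scores)}")
```
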

 

After building an ecosystem blueprint, how can we be sure its deployment supports sound design and security principles? The next blog will look at the role of best practices, industry guidelines, and compliance requirements.

Anyone who’s hired for a technical team can understand this scenario.

 

You’re hiring for “X” positions, so of course you receive “X” times infinity applications. Just because you’re looking for the next whizbang engineer, it doesn’t mean you get to neglect your day job. There are still meetings to attend and emails to write, never mind looking after the team and the thing you’re hiring for anyway! So, what do you do? Most people want to be as efficient as possible, so they jump right to the experience and skills section of the resume. That’s what you’re hiring for, right? Technical teams require technical skills. All the rest is fluff.

 

If you’ve done this, and I bet you have if you listen to the little voice in the back of your head, then you may be doing a disservice to your team, the candidate, and ultimately yourself. What the heck am I talking about? Yup, the “soft skills,” and today I’d like to talk primarily about communication skills.

 

I can hear the eyerolls from here, so let’s spend a few minutes talking about why they’re so important.

 

I’m a <insert technologist label here>. Why do I care about communications?

In 2019, continuous deployment pushes code around the clock, security event managers stop bad guys before we realize they’re there, and help desk solutions let tickets practically manage themselves. Yet even with these technological and automation advances, people want relationships. If you think about it, when you really need to get anything complex done, do you ask your digital personal assistant, or do you work with another human? We’re out for human interaction. By building relationships, we can move IT further away from the reputation of “the Department of No” and toward frameworks and cultures that enable things like DevOps and becoming a true business partner. Our ability to communicate builds bridges and becomes the foundation for our relationships.

 

Still skeptical? Let’s walk through another scenario.

Assume you’re an architect in a large organization and you’ve got an idea to revolutionize your business. You manage to get time with the person who controls the purse strings and you launch right into what this widget is and what you need for resources. You wow them with your technical knowledge and assure them this widget is necessary for the organization to succeed. Is this a recipe for success? Probably not. You might even get bounced out of the office on your ear.

 

Let’s replay the same scenario a little differently. You get time on the decision maker’s calendar, but you do a little homework first. You ask your colleagues about the decision maker and what type of leader they are. You dig into what their organizational goals are and how they might be measured against those goals. Armed with this information, you frame the conversation in terms of the benefits delivered both to the organization and to the purse holder. And you can speak their language, which they’ll most likely appreciate and which will make your conversation go that much smoother. Thanks to your excellent communication skills, your project is approved, you have a new BFF, and you both go get tacos to celebrate your impending world domination.

 

Neither the world domination nor the tacos would be possible without the ability to convey benefits to the recipient in a language they understand. The only difference between world domination and coming across like a self-righteous nerd who cares more about their knobs than the organization is the ability to clearly and succinctly communicate with the business in a language they understand.

 

So now that we’ve talked a bit about why...

Let’s circle back to the original premise for a moment: you should be building communication skills into your teams. Obviously if you’re hiring a technical writer, communication is the skill, but chances are you’re looking for someone who has an attention to detail and can write some form of prose. The ability to craft a narrative will be vital if you’re looking for a technical marketing person. Anyone who’s in a help desk role needs to build rapport, so communicating with empathy and understanding becomes vital. If you’re hiring for an upper level staff position, the ability to distill highly technical concepts down to fundamentals and convey them in language that makes sense to the recipient is paramount. In my experience, this last example can be a bit of a rarity; if you find someone either within or outside your ranks who exudes it, you should think about how you can keep them on your hook.

 

How do you achieve this unicorn dream of hiring for communication skills? Classic geek answer: “it depends.” I can’t possibly diagnose all the permutations in my wee little blog post. Rather than try to give you a recipe, I’ll say this: shift your approach slightly to be more mindful of what you’d like to achieve with your communications, and you’ll inherently be more successful.

 

One last point before I bid you adieu. Here, we’ve focused on why you need to hire for these skills. This isn’t to say for one second that you shouldn’t also build them within your existing organizations. This, however, requires looking at the topic from some different angles and a whole other set of techniques, so we’ll leave it for another day. Until then, I hope you found this communication helpful, and I’d love to turn it into a dialogue if you’re willing to participate in the comments below.

In the more than 20 years I’ve spent in the IT industry, I’ve seen many changes, but nothing has had a bigger impact than virtualization.

 

I remember sitting in a classroom back in the early 2000s with a server vendor introducing VMware. They shared how it enabled us to segment underused resources on Intel servers into something called a "virtual machine" and then run separate environments in parallel on the same piece of hardware—technology witchcraft.

 

This led to a seismic shift in the way we built our server infrastructure. At the time, our data centers and computer rooms were full of individual, often over-specified and underutilized servers, each running an operating system and application, all of which consumed space, power, and cooling, and cost a small fortune. Add in deployments that took months, which made projects slow, reduced innovation, and elongated the response to business demands.

 

Virtualization revolutionized this, allowing us to reduce server estates and lower the cost of data centers and infrastructure. This allowed us to deploy new applications and services more quickly, letting us better meet the needs of our enterprise.

 

Today, virtualization is the de facto standard for how we deploy server infrastructure. It’s hard to believe it was once an odd, cutting-edge concept. While it’s the standard deployment model, the reasons we virtualize have changed over the last 20 years. It’s no longer about resource consolidation—it’s more about simplicity, efficiency, and convenience.

 

But, virtualization (especially server-based) is also a mature technology designed for Intel servers, running Windows and Linux inside the sacred walls of our data center. While it has served us well in making our data centers more efficient and flexible, reducing cost and ecological impact, does it still have a part to play in a world rapidly moving away from these traditional ways of working?

 

Over the next few weeks, we’ll explore virtualization from where we are today, the problems virtualization has created, where it’s heading, and whether it remains relevant in our rapidly changing technology world.

 

In this series, we’ll discuss:

  • The Problems of Virtualization – Where we are today and the problems 20 years of virtualization have caused, such as management, control, and VM sprawl.

  • Looking Beyond Server Virtualization – When we use the word virtualization, our thoughts immediately turn to Intel servers, hypervisors, and virtual machines. But the future power of virtualization lies in changing the definition. If we think of it as abstracting the dependency of software from specific hardware, it opens a range of new opportunities.

  • Virtualization and the Drive to Infrastructure as Code – How the shift to a more software-defined world is going to cement the need for virtualization. Environments reliant on engineered systems are inflexible and slow to deploy. They’re going to become less useful and less prevalent. We need to be able to deploy our architecture in new ways and deliver it rapidly, consistently, and securely.

  • The Virtual Future – As we desire increasing agility, flexibility, and portability across our infrastructure and need more software-defined environments, automation, and integration with public cloud, virtualization (although maybe not in the traditional form we think of) is going to play a core part in our futures. The more our infrastructure is software, the more ability we’ll have to deliver the future so many enterprises demand.

 

I hope you’ll join me in this series as I look at the changing face of virtualization and the part it’s going to play in our technology platforms today and in the future.

 

Had a wonderful time at Black Hat last week. Next up for me is VMworld in two weeks. If you're reading this and attending VMworld, stop by the booth and say hello.

 

As always, here are some links I hope you find interesting. Enjoy!

 

Hospital checklists are meant to save lives — so why do they often fail?

A good article for those of us who rely on checklists, and on how to use them properly.

 

A Framework for Moderation

Brilliant article to help make sense of why content moderation is not as easy as we might think.

 

With warshipping, hackers ship their exploits directly to their target’s mail room

If you don't have the ability to detect rogue devices joining your network, you're at risk for this attack vector.

 

Uber, losing billions, freezes engineering hires

That's a lot of money disappearing. Makes me wonder where it's going, because it's not going to the drivers.

 

Study: Electric scooters aren’t as good for the environment as you think

Oh, maybe Uber is paying millions for research articles to be published. Just kidding. Uber offers scooters as well, as they remain dedicated to making things worse for everyone.

 

Robot, heal thyself: scientists develop self-repairing machines

What's the worst that can happen?

 

The World’s Largest and Most Notable Energy Sources

I enjoyed exploring this data set, and I think you might as well. For example, current energy consumption for bitcoin is about 60,000 megawatt hours. That's almost the same daily amount as the entire city of London.

 

We brought custom black hats to Black Hat, of course. We also brought photobombs by Dez, apparently.

 

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Mav Turner about machine learning and artificial intelligence. These technologies have recently come into vogue, and Mav does a great job of exploring them and their impacts on federal networking.

 

Machine learning (ML) and artificial intelligence (AI) could very well be the next major technological advancements to change the way federal IT pros work. These technologies can provide substantial benefits to any IT shop, particularly when it comes to security, network, and application performance.

 

Yet, while the federal government has been funding research into AI for years, most initiatives are still in the early stages. The reason for this is preparation. It’s critically important to prepare for AI and machine learning technologies—adopting them in a planned, purposeful manner—rather than simply applying them haphazardly to current challenges and hoping for the best.

 

Definition and Benefits of AI

 

From a high-level perspective, machine learning-based technologies create and enhance algorithms that identify patterns in large sets of data. Artificial intelligence is the ability for machines to continuously learn and apply cross-domain information to make decisions and act.

 

For example, AI technology allows computers to automatically recognize a threat to an agency’s infrastructure, automatically respond, and automatically thwart the attack without the assistance of the IT team.
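
To illustrate the detect-and-respond loop in the simplest possible terms, here’s a Python sketch that uses a plain threshold rule in place of a trained model (real AI/ML-driven security tooling is far more sophisticated). The event fields, threshold, and block action are invented for the example.

```python
# Minimal sketch of a detect-respond loop, using a simple threshold rule in
# place of a trained model. Event fields, the threshold, and the "block"
# action are invented for illustration.

from collections import Counter

FAILED_LOGIN_THRESHOLD = 10   # per source within the window analyzed

def detect_bruteforce(events: list) -> list:
    """Return source IPs whose failed-login count meets or exceeds the threshold."""
    failures = Counter(e["src_ip"] for e in events if e["type"] == "failed_login")
    return [ip for ip, count in failures.items() if count >= FAILED_LOGIN_THRESHOLD]

def respond(src_ip: str) -> None:
    # Placeholder response: a real system would push a block rule or quarantine.
    print(f"Auto-response: blocking {src_ip} at the perimeter")

if __name__ == "__main__":
    events = ([{"type": "failed_login", "src_ip": "203.0.113.9"}] * 12 +
              [{"type": "failed_login", "src_ip": "198.51.100.4"}] * 2)
    for ip in detect_bruteforce(events):
        respond(ip)
```
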

 

But, remember, planning and preparation are absolutely necessary before agencies dive into AI implementation.

 

Preparing for AI

 

From a technology perspective, it’s important to be sure agency data is ready for the shift. Remember, machine learning technologies learn from existing data. Most agencies have data centers full of information—some clean, some not. To prepare, the absolute first step is to clean up the agency’s data store to ensure bad data doesn’t lead to bad automatic decisions.

 

Next, prepare the network by implementing network automation. Network automation will allow federal IT pros to provision a large number of network elements, for example, or automatically enhance government network performance. A good network automation package will provide insights—and automated response options—for fault, availability, performance, bandwidth, configuration, and IP address management.

 

Finally, strongly consider integrating any information that isn’t already integrated. For example, integrating application performance data with network automation software can automatically enhance performance. In fact, this integration can go one step further. Integrating historical data will allow the system to predict an impending spike in demand and automatically increase bandwidth levels or enable the necessary computing elements to accommodate the spike. While network automation is powerful and can start your organization down the path to prepare for AI-enabled solutions, there’s a big difference between network automation and AI.
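
As a rough illustration of using historical data to anticipate demand, here’s a small Python sketch that flags a likely spike when the latest utilization reading jumps well above its recent baseline—the kind of signal an automation system could act on by pre-provisioning bandwidth or compute. The sample data and threshold are made up.

```python
# Minimal sketch: using historical utilization data to anticipate a demand
# spike. The data, window, and sigma threshold are invented for illustration.

from statistics import mean, stdev

def predict_spike(history_mbps: list, window: int = 7, sigma: float = 2.0) -> bool:
    """Flag a likely spike if the latest reading exceeds the recent
    moving average by more than `sigma` standard deviations."""
    if len(history_mbps) <= window:
        return False
    recent = history_mbps[-window - 1:-1]           # the readings before the latest one
    baseline, spread = mean(recent), stdev(recent)
    return history_mbps[-1] > baseline + sigma * spread

if __name__ == "__main__":
    utilization = [410, 395, 420, 405, 415, 400, 425, 640]  # Mbps; last value jumps
    if predict_spike(utilization):
        print("Projected spike: pre-provision additional bandwidth/compute")
```
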

 

Conclusion

 

This type of preparation provides two of the most essential elements of a successful AI implementation: efficiency and visibility. If an agency’s network isn’t already as efficient as it can be—and if the different elements of the infrastructure are not already linked—advanced technologies won’t be anywhere near as effective as they could be if the pieces were already in place, ready to take the agency to the next level.

 

Find the full article on our partner DLT’s blog Technically Speaking.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Security is a key operational consideration for organizations today because a breach can lead to significant losses of revenue, reputation, and legal standing. An entity’s environment is an ecosystem composed of users, roles, networking equipment, systems, and applications coming together to facilitate productivity and profitability as securely as possible. An environment will never be 100% secured against all threats. The next best option is to proactively defend against known attacks and to provide real-time, adaptable monitoring capabilities to detect and alert on behaviors outside of what is considered normal in the environment.

 

This blog series will present suggestions and guidelines for building and maintaining an environment for administrators to defend against and mitigate threats.

 

Security is no longer just an overlay to a network topology. Security methods provide protection for data, access, and infrastructure, and should be defined and deployed based on a carefully defined security policy. An effective security policy integrates well-known protection methods into a network in a way that meets both security standards and the business goals of the entity being secured. This is facilitated by defining use cases representing key business drivers, such as:

 

  • Improved efficiency through streamlined security processes reducing operational expenses in terms of time, money, and personnel
  • Increased productivity through well-defined and applied policies correctly balancing the level of access with perceived risk
  • Better agility allowing for efficiency with respect to the implementation of compliance and regulatory objectives, migration strategies, and risk mitigation techniques.

 

Identifying use cases is often the catalyst for a security policy review. Remember, each entity within an organization will have its own objectives. Even if things look typical on the surface, to sell the security policy, its benefits must be apparent to each stakeholder.

 

Here are some common use cases and relevant details a security policy should outline.

  • Performance and Availability
    • SLA requirements
    • Capacity and potential growth
    • Efficient use of bandwidth and device resources
    • Planning for redundant designs
  • Audit and Logging
    • Compliance or legal requirements
    • Compliance demonstration during audits
    • Granularity of monitoring and control
      • Per user
      • Command level
    • Detect suspicious behavior of log sources
    • React to expected host/log sources not reporting
    • Installation of agents on endpoints or collectors
    • Consolidation of log sources for a single view
  • Monitor/troubleshoot
    • What is the cost of downtime?
    • Acquisition and placement of management tools
    • What key events need to be highlighted?
    • Application of analytics, rulesets, and alerts
    • Escalation chain to handle alerts and incident response
    • Automated controls versus user intervention
    • Issue reporting mechanisms and management protocols
    • Support costs: in-house, outsourced
  • Asset Provisioning
    • Centralized repository versus per-device
    • Need for multiple levels of control
    • Automation of distribution
    • Change management processes
    • Vulnerability assessment strategy
  • Acceptable Use Monitoring
    • Employee monitoring
    • Analyzing user behavior to detect potentially suspicious patterns
    • Analyzing network traffic to pinpoint trends indicating potential attacks
    • Identifying improper user account usage, such as shared accounts
    • Publishing policies for the use of the organization’s resources
    • Develop a baseline document to outline threshold limits, critical resources information, user roles, and policies, and apply this to a monitoring system, service, or playbook
    • Legally acceptable method of handling breaches
  • Threat Playbook
    • Identify the threats and attacks of concern (could be industry-specific):
      • Detecting data exfiltration by attackers
      • Detecting insider threats
      • Identifying compromised accounts
      • Detection of brute force attacks
      • Application defense checks
      • Malware checks and update process
      • Detection of anomalous ports, services, and unpatched hosts/network devices
      • Incident investigation process
    • Proactive threat hunting
    • Engaging legal entities and incident response personnel

 

In summary, a security policy builds the foundation for a secure network, but it must be valuable and enforceable to an organization and all stakeholders.

 

In the next blog in this series, we’ll look at how use cases can be mapped to the components in the environment.

We’ve all been there. It’s time to consider building a home lab, whether it’s for testing a scenario, preparing for a certification, or learning more about a software application. There are two home lab options to consider.

 

A physical home lab includes a server rack, servers, networking equipment, a monitor, a keyboard, a KVM, and so on. Additionally, the rack requires a space tall and wide enough to house it, as well as sufficient power to run it.

 

A software-defined (virtualized) home lab eliminates most of the items needed in a physical lab environment. There’s no need for a rack, standalone servers, or the space to house them, and the power consumption is significantly lower. A software-defined home lab can run off a single NUC, a desktop, or a laptop. The hardware requirements (RAM, storage, processor) can vary depending on your home lab.

 

This is where we’ll dig deeper into the question asked in the title of this post: what is virtualization?

 

Virtualization provides an alternative to a physical environment because it allows the end user to create a software-based (virtualized) model of a server, along with additional servers, applications, and networks, in a software-defined manner. Time savings are another important factor. VM templates are lifesavers: if you’re not satisfied with the virtual environment you’ve created, you can delete it and start over from the template, and you’ll be up and running in half the time it would take to rebuild a physical server the same way.

 

There are five commonly known use cases to virtualize:

 

Server – allows multiple operating systems to run on one physical server

Storage – provides the ability to combine multiple physical storage options into a single logical storage environment

Network – provides the ability to create multiple software-defined networks (SDN) in a virtualized manner

Desktop – like server, but allows the end user to create and deploy multiple virtualized desktops onto a single desktop computer accessible from any device

Application – a prime use case scenario when you need to host an application for testing

 

The options for creating a virtualized environment have also expanded to include public cloud services, but which option fits depends on your needs. For short-term projects with a limited budget that require a VM stood up in seconds, a public cloud service is ideal. It provides flexibility without the overhead, but keep in mind these services are easy to adopt because of their simplicity, and there’s a tendency to neglect the costs associated with them over time. If a public cloud service is a long-term solution, it’s even more important to keep track of costs for the same reasons, perhaps with budget alerts tied to each option you select and its corresponding cost.
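
As a rough illustration of that kind of cost tracking, here’s a small Python sketch that estimates monthly lab spend from per-VM hourly rates and flags when it crosses a budget. The rates and budget are placeholders, not real provider pricing.

```python
# Minimal sketch: tracking estimated monthly cost of lab VMs running in a
# public cloud and flagging when the total crosses a budget. The hourly
# rates and budget are placeholders, not real provider pricing.

def monthly_estimate(vms: list) -> float:
    """Estimate monthly spend from (name, hourly_rate_usd, hours_per_day) tuples."""
    return sum(rate * hours_per_day * 30 for _, rate, hours_per_day in vms)

def check_budget(vms: list, budget_usd: float) -> None:
    total = monthly_estimate(vms)
    status = "OVER BUDGET" if total > budget_usd else "ok"
    print(f"Estimated lab spend: ${total:.2f} / ${budget_usd:.2f} ({status})")

if __name__ == "__main__":
    lab_vms = [
        ("dc01",   0.05, 24),   # domain controller left on all day
        ("testdb", 0.20, 6),    # database VM used a few hours a day
    ]
    check_budget(lab_vms, budget_usd=50.00)
```
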

Hyperconverged infrastructure (HCI) is adopted more and more every year by companies both big and small. For small- to medium-sized businesses (SMBs) and remote branch offices, HCI provides an all-in-one solution, eliminating the need for power, space, and cooling for multiple different systems. For bigger companies, at the enterprise level and below, HCI gives a flexible means of scaling out as workload demand increases. There are many other benefits to adopting a hyperconverged approach to data center infrastructure. Here are what I consider the top five.

 

Flexible Scalability

The most attractive benefit of HCI is the ability to scale out your environment as your workload demand increases. Every application or data center has different workloads. Some demand a lot of resources while others don’t need as much. In the case of HCI, as workload demands increase, there’s no need to do a forklift upgrade. You simply add a new node or block to your current infrastructure and configure it as needed.

 

A Lower Total Cost of Ownership

Total Cost of Ownership (TCO) isn’t usually something the IT community really cares about. Generally, we’re given what we get, and we have to make it work effectively. Decision makers find TCO extremely important, and this metric plays a major role in the procurement process. TCO is good for both the decision makers and the engineers in the trenches. On one hand, the decision makers see a lower-cost solution overall; on the other, the engineers get to work with equipment and software that isn’t completely budget-constrained. Time to market is much quicker, fewer administrative staff are required, and there are cost savings when it comes to power, space, and cooling.

 

Less Administrative Overhead

When there’s less to manage, there’s less administrative overhead. Lowering administrative overhead doesn’t necessarily mean cutting down your staff; it means making your staff more effective with less work. In a traditional data center infrastructure, there are many moving parts, which requires many different teams or silos. Consolidating all the moving parts into one chassis cuts down on administrative overhead and segregated teams.

 

Avoid Overprovisioning

HCI essentially eliminates the possibility of overprovisioning. Traditional infrastructures require a big purchase upfront, based on an analyst’s projections of the workload three to five years out. Most of those workload projections end up being too generous, and the organization is left with expensive hardware that never gets near capacity usage. HCI allows you to buy only what you currently need, with maybe a 15% buffer, and then as the workload increases, you can flexibly scale out to meet it. By eliminating the need for long-range workload projections and large upfront hardware/software purchases, TCO decreases and flexibility increases.
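
Here’s a quick Python sketch of that “current need plus a buffer” arithmetic: given workload demand and per-node capacity, it computes how many nodes the cluster needs as demand grows. The capacity units and demand figures are purely illustrative.

```python
# Minimal sketch of "buy what you need plus a buffer" sizing: how many HCI
# nodes does the cluster need now, and when does the next node get added?
# Capacity units and demand figures are illustrative only.

import math

def nodes_needed(demand_units: float, node_capacity: float, buffer: float = 0.15) -> int:
    """Nodes required to carry current demand plus a headroom buffer."""
    return max(1, math.ceil(demand_units * (1 + buffer) / node_capacity))

if __name__ == "__main__":
    node_capacity = 100          # arbitrary capacity units per HCI node
    for demand in (250, 320, 400, 520):
        print(f"demand {demand:>3} -> {nodes_needed(demand, node_capacity)} nodes")
    # demand 250 -> 3 nodes; 320 -> 4; 400 -> 5; 520 -> 6 (ceil of demand*1.15/100)
```
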

 

Takes the Complexity Out of New Deployments

Traditional infrastructure deployments can be a very complex process. As discussed earlier, the workload projections move forward to procurement of hardware and software, which then need to be racked and stacked, cabled, and configured. This deployment approach requires multiple people, server lifts, and rack space. HCI deployments make it easier for less experienced administrators to unbox, rack, and configure an entire deployment. In the case of scaling, the same deployment model applies: unbox a new block, bolt it on, and configure it. There’s no need for the storage team to provision new storage, or the networking team to add more cables. It’s an all-in-one solution and simple to deploy.

 

There are numerous benefits to adopting HCI over traditional infrastructure. I’ve only listed five, which barely scratch the surface. For the benefit of those of us who haven’t deployed HCI before, what are some benefits you’ve realized from your deployment that aren’t on this list?

Heading to Las Vegas this week for Black Hat. In preparation, I'm bringing a burner phone, wrapping it and my laptop in foil, and then burning them both when I head to the airport to leave.

 

As always, here are some links I hope you find interesting. Enjoy!

 

Woman arrested after Capital One hack spills personal info on 106 million credit card applicants

Secure your S3 buckets, y'all. This is a known attack vector, highlighted here as a "configuration vulnerability."

 

What We Can Learn from the Capital One Hack

Good summary of details regarding the "configuration vulnerabilities" existing within the open source code deployed by Capital One.

 

GitHub sued for aiding hacking in Capital One breach

This seems to be a stretch, but it's interesting to note. I'm not certain how GitHub is supposed to recognize leaked data is being stored (it could be fake data), or how they should verify code is secure.

 

Computer Science Curriculums Must Emphasize Privacy Over Capability

I like the idea, but don't think it's enough. Because most of the folks working in IT aren't CS majors, maybe we should have all fields of study include basic privacy and security information, too.

 

Google’s File on You is 10 Times Bigger Than Facebook’s — Here’s How to View It

In case you were wondering about the data Google is tracking as you surf the web.

 

All the best engineering advice I stole from non-technical people

A bit long, but worth your time.

 

NASA has created food out of thin air and it could be the solution to global hunger

Seems promising, but you'll have my full attention when you create bacon from thin air.

 

Got tired of mowing grass between the newly planted shrubs, so we built a new border path. At this rate, we won't have any grass to mow by 2021.

 

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Mav Turner about the need for federal employees to develop soft skills like communication and adaptability. I agree if folks want to get ahead in their careers today, they need strong skills like these.

 

Today’s federal IT pro has a broad range of responsibilities and corresponding talents. The need for a multidisciplinary skillset is only increasing. As environments get more complex and teams grow, federal IT pros will need a broader range of skills—specifically, “soft skills” or “people skills.” These additional skills have the potential to help solidify job security and make the federal IT pro indispensable.

 

Soft skill requirements

 

Communication, collaboration, and adaptability are the cornerstones of a strong, productive team—hence, three of the most desirable “soft skills” the federal IT pro can develop and grow.

 

Communication

 

When a project is created—and must be planned, tested, and executed—it’s critically important to have the ability to communicate project goals, strategy, planning, timelines, testing, implementation, and ongoing maintenance to everyone on the team, regardless of technical specialty.

 

Each group within the team should have an understanding of the criticality of the project, as well as its nontechnical goals as it relates to the agency’s mission. Communication is the key to achieving this goal. Federal IT pros must be able to communicate not only within the team but to others within the organization as well.

 

Most agencies have some combination of technical folks and business folks. A technical staffer who can explain how a technical project will help drive agency mission or business goals will likely have a successful career. Additionally, a technical staffer who can also explain the financial impact—ideally, the long-term cost-savings impact of many of today’s leading-edge technology projects—is likely to have an even more successful career.

 

Collaboration

 

Different agency groups will need to work together to ensure project success. According to the 2019 SolarWinds® Federal Cybersecurity Survey Report, a majority of security issues are born of user error. In fact, 56% of respondents say careless untrained insiders are a significant source of IT security threats in their agencies.

 

Based on those statistics, if an agency wants to enhance its security posture, collaborating with the rest of the agency will likely be a critical component of the project’s success.

 

The federal IT team can work with the agency’s internal communications team to implement an awareness or education program to ensure all agency personnel are informed—and are doing their part in the broader agency effort.

 

Adaptability

 

As every federal IT pro knows, change is constant. Whether the change is related to budget issues, administration changes, technology advancements, or a combination of all three, the ability to function in this type of changing environment is becoming increasingly important.

 

Critical thinking skills are a significant part of adaptability. As things change, federal IT pros must be able to shift thinking quickly and effectively. Critical thinking skills include problem identification, research, objective analysis, and the ability to draw conclusions and make decisions.

 

The final piece of adaptability is the willingness to change. It’s important to embrace change and thrive in this type of environment. A willingness—even eagerness—to learn new technologies, take on new challenges, and think differently will almost certainly ensure a long, successful career in the federal IT workforce.

 

Conclusion

 

Gone are the days of silo-based IT skills. Communication, collaboration, and adaptability will soon be job requirements within the federal IT workforce, even for the most technical staffers.

 

Find the full article on Government Technology Insider.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

This is the first of a five-part series on HCI. I hope you enjoy it and that it prompts discussion. Please feel free to reach out to me, either here or via Twitter at @MBLeib.

 

What Is Hyperconverged Infrastructure?

Hyperconverged infrastructure is one of the new hot areas of technology in the IT data center space. Like most areas of technology, there are the marketing words and then there’s the definition. So, what’s the definition? There’s no industry-wide meaning, but in my opinion, it involves the management of an architecture based on hardware, software, storage, and a hypervisor. As this audience is familiar with hypervisors, I’m happy to skip the “what is a hypervisor” conversation, but in certain cases the architecture supports VMware and not KVM, Hyper-V, or some other hypervisor; in other cases, the architectures support all of them. Let’s be clear here, though: I believe the original concept of this category is built around the hypervisor, hence the term “hyperconverged.”

 

I don’t believe a compute/storage environment without the hypervisor qualifies. So, for example, in the backup space, Rubrik and Cohesity, with no disrespect, are converged but not hyperconverged. And, believe me, there are many advantages in the converged arena as well, but by my definition, this isn’t that. I lay no claim, by the way, to the veracity of my definition.

 

The history of such an architecture goes back to the launch of the EMC/Cisco product, the vBlock. The idea was a compute environment powered by VMware and Cisco servers (UCS), a switched environment powered by Cisco Nexus, and, of course, storage by EMC. The product was sized to requirements: your compute environment would be built around supporting the VMware load, and the storage would cover all your storage requirements. Seems easy, right? It wasn’t. These were first-generation builds and required much in the way of fine-tuning and technical support. At the same time, NetApp introduced their answer to this with the FlexPod. These were also first-generation products, and though they were built quite robustly, they were tougher to manage than ever intended.

 

Soon came the launch of products from Nutanix and SimpliVity, designed around industry-standard x86 hardware and, initially, a shared spinning-disk storage environment with a virtual SAN architecture spread across nodes. This became a far more viable build, with sizing around three- or four-node x86 clusters. Scalability was initially difficult, though: once you outgrew your compute sizing or your storage, you had to buy another full cluster.

 

Alternative builds arrived on the scene from the likes of Datrium, NetApp, and VMware (VxRail), as well as others, built around the idea of using storage nodes and compute nodes as separate components. This gave the customer far friendlier ways to grow the architecture. No longer were you limited by combined storage/compute limits. If you needed more storage, you’d place a storage node into the cluster, and if you needed compute, well, that was easy as well. I find these architectures compelling.

 

As you can see, there are many approaches to converged architecture, each designed to solve a variety of inherent issues. With so many options to draw from, your data center needs will likely be met by one of them.


I’d also like to stress, as has always been my opinion, that the idea of convergence may not be appropriate for some scenarios. Orchestration elements have become far more sophisticated, such that “pools” of resources can be provisioned from the whole using a variety of methods, depending on the hardware to be leveraged. Sizing, needs, scalability, and other variables can be used to achieve the same or similar goals. Building out separate servers, fabric, storage, and network is still a viable option. A potential need can also be solved by a newer version of the converged architectures available, as HPE has done with the fully managed Synergy architecture.

 

Before endeavoring to implement an approach, be sure that your goals are being met by the solutions you pursue.

Had a great stay-cation last week. I made no plans except a quick overnight trip to the beach. It was wonderful doing nothing, catching up on sleep, and enjoying our backyard space. I highly recommend everyone find the time to do nothing; your body, mind, and spirit will thank you.

 

As always, here are some links I hope you find interesting. Enjoy!

 

Why the WhatsApp Security Flaw Should Make Enterprise IT Nervous

WhatsApp may be the most flawed application out there right now, owned by a company (Facebook) known to have shoddy security practices. If you are using this app, you are putting yourself, your friends, and your company network at risk.

 

Netflix: 105 Mil Have Watched One ‘Orange Is the New Black’ Episode

Buried inside this story is the reason I included this link: Netflix is losing customers. This is the story to track over the next 24 months. Netflix has a lot of data, and a lot of smart people. I can't imagine this is the end, but likely a pivot.

 

You’re very easy to track down, even when your data has been anonymized

Privacy is an illusion, a lie we tell ourselves every day.

 

Louisiana declares state of emergency after ransomware attacks

"It'll get worse before it gets better." - Dalton

 

Amazon dominates IaaS cloud services market, small enterprises lose out

No shock here, but AWS is the market leader in IaaS, followed by Azure. But many are surprised to find that Google is 4th, behind Alibaba. I'm certain they exist, but I don't know any company outside of Silicon Valley that uses GCP for production purposes.

 

Why the dockless scooter industry is going after a repossessor and a bike shop owner

SPOILER ALERT: A DotCom company didn't care how their business would affect anything other than money generated. By advertising the scooters can be "left anywhere," these companies have created a nuisance. I'm glad to see people standing up to the stupid.

 

Quantum Supremacy Is Coming: Here’s What You Should Know

Long, but a good summary of quantum computing for those who haven't taken a dive into those waters yet. I view quantum supremacy as the moment when quantum computing is powerful enough to render all current encryption useless.

 

"These go to eleven."

 

Happy, honored, and humbled to have been awarded the Microsoft MVP for the 11th consecutive year.

 

The summer is full of important dates, from national holidays to family vacations to birthdays and anniversaries big and small.

 

In a few short days, one such birthday is coming up—an event noted and even celebrated by people across the globe. I’m speaking, of course, about July 31—Harry Potter’s birthday.

 

In considering the legacy of the Harry Potter stories, there are many lessons for the IT practitioner. Examples include:

  • The importance of robust physical security of our most precious on-premises assets, like data and philosopher’s stones
  • The need for security protocols to detect and trap bugs within the system
  • How a strong core team with diverse skills can help overcome threats both big and small

 

But one lesson stands out for me, here in the days after news broke about the latest internet fiasco, FaceApp. I’ve written before about the many poor choices made by social media companies and app developers – especially when it comes to security, privacy, and transparency. On a personal note, because of those concerns, I left the Facebook platform completely about a year ago.

 

With those two things out in the open, I’d like to suggest that, of all the Harry Potter characters, it’s the humble but capable Mr. Weasley who exemplifies both how we got to this point, and how we might make better choices in the future.

 

As for how we got here: of all the people we meet in the Potterverse, it’s Arthur Weasley who most strongly embraces technology. From his tricked-out Ford Anglia to his willingness to try using “stitches” as part of his recovery from a near-fatal snake bite, Arthur’s enthusiastic openness to innovation and alternative solutions puts him on the cutting edge within the wizard community.

 

But, as his obsession with collecting plugs (and his fascination with things that run on “eckeltricity,” as he calls it) shows, he often doesn’t fully understand how the technology he’s so captivated by works. I’m sure anyone who has worked on a help desk for more than 15 minutes can tell similar stories.

 

While this lack of understanding doesn’t lead to any serious consequences for Mr. Weasley—and thankfully, the same can be said for most end users in most organizations on most days—we who work in the IT trenches can certainly see where the dangers lie. And it explains how FaceApp, and similar breaches over the past few years, happen; and keep happening; and happen seemingly overnight (I say “seemingly” because FaceApp itself has existed since 2017 and this was not its first controversy). Like Arthur Weasley, some folks are open to new things, and willing to enthusiastically embrace advances allowing them to live on the cutting edge. But their lack of familiarity with the underlying technology causes them to misunderstand the risks.

 

And all of this leads up to why I think it’s so wonderfully ironic for Mr. Weasley himself to give the simple, yet effective lesson on how to keep our digital lives safe in these uncertain times.

“What have I always told you? Never trust anything that can think for itself if you can’t see where it keeps its brain?”

J.K. Rowling, Harry Potter and the Chamber of Secrets

 

After discovering how his daughter has been pouring out her heart (and, it turns out, her life essence) all year to a sentient diary possessed by an evil wizard, Mr. Weasley offers up the commonsense rule we all should keep in mind when considering installing a shiny new app; clicking the funny online survey to see which type of dog you are; or tapping the mesmerizing button offering a download of the movie not yet out of theaters.

 

It’s why understanding where “it” keeps its brain—whether the “it” in question is an app or website or vendor—is so important. As we saw with Cambridge Analytica; Google listening to audio recorded by Google Home devices; weather apps selling user data to the highest bidder; a Facebook API bug exposing the photos of 6.8 million users; and now this latest issue with FaceApp, there is no reason to expect the industry to finally step up and be more careful.

 

For those reading this and fretting over whether it’s too much to ask simple end users to become expert technologists, I would underscore that the FaceApp issue wasn’t even about where or how the data—the “brain”—was being kept. It was in the terms of service.

 

What I’m talking about is more than another case of the adage “if it seems too good to be true, it probably is.” It’s also the reality that (as another adage goes) “If you’re not paying for it, you’re not the customer, you’re the product.”

 

So, even if the end user can’t determine where it keeps its brain, we must always remember we know where WE keep OUR brain, and we should use it conscientiously before adding the next shiny new eckeltricity-plug app to our collection.

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Jim Hansen with ideas for engaging agency staff to be part of the solution to security challenges. Insider threats have been a leading cause of breaches for as long as I can remember, and I like Jim’s approach of making everyone a security advocate.

 

The rising numbers of data breaches should come as no surprise to federal IT security pros who work every day to ensure agency information is secure. However, these breaches may not be something a federal IT team can prevent on its own.

 

According to the most recent SolarWinds Federal Cybersecurity Survey, more than 50% of respondents say careless or untrained users are the leading cause of data breaches across the federal government. Spam, malware, and social engineering are far and away the greatest threats; oftentimes end users unknowingly take actions that go against agency security policy or harm the network.

 

Three Steps to Stronger Security

 

While technology is generally the most solid defense against security threats, federal IT security pros should also take the following steps to improve agency security.

 

1. Start from the top. In any organization, leadership sets the tone. If all agency heads become security advocates, it will send a clear message on prioritizing security initiatives. Consider hosting a town-hall type meeting, or a “lunch and learn,” where leaders explain what’s at stake to encourage employees to take a more personal approach to security. Leadership can explain what they do to protect agency data while discussing the importance of agency policies and enforcement.

 

2. Provide solid user education. Security breach statistics consistently show that most attacks originate inside the organization, stemming from things like an employee falling victim to a phishing scheme or simple end-user errors that leave them, their identities, and their systems exposed. Provide simple, easy-to-follow education, direction, and training. Educate staffers on the implications of not following the training in a way specific to the agency. Give examples of the types of things to look for in phishing or socially engineered attacks. Flag security vulnerabilities that could be exacerbated by end-user activities, such as using agency email on a smartphone OS that requires a security patch or accessing a social media profile with a password that may have been part of a larger breach. The more the end user knows, the better.

 

3. Ensure security policies are fluid. Security threats change every day; policies that stay the same year after year are inherently outdated. Reassess policies every six to nine months to ensure the policies align with the changing threat landscape and risks to the agency so they’re as effective as possible. To encourage more end-user advocacy, establish two different security policies: one for the IT and security team, and one specifically for staff. And, be sure to update both often. This not only shows end-users the agency’s level of commitment, it will provide an opportunity for ongoing and continued education.

 

 

Remember, to enhance the agency’s security posture, security initiatives must be a priority for everyone—not just the IT team. More education and more participation will often lead to enhanced end-user engagement, and that’s the ultimate goal.

 

Find the full article on our partner DLT’s blog Technically Speaking.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.
