
Geek Speak


The first three blogs in this series were all about building a blueprint for a well-designed environment. In this article, we’ll review more practical considerations that influence the overall architecture and design of the ecosystem, which in turn requires specific features and methodologies dictated by the required data flows in our use cases. Many of these concepts may not appear to be focused on security, but they cover basic networking and documentation. It’s surprising how many organizations lack detailed knowledge of their underlying network design and the capabilities of its components, which is a must for the detection and mitigation of threats.


Logical design provides data segmentation, the first real step to a secure and resilient design. Sub-interfaces, VLANs, VRFs, and virtual and tunnel interfaces separate traffic and allow various forwarding and security methods to be applied to individual flows. Devices such as firewalls and intrusion prevention appliances may be physically connected to routers or switches; however, logical design features such as firewall contexts and virtual sensors handle the segmented flows.


Another key concept is the use of addresses and identifiers as the basis for implementing security policy rules and requirements. Address assignment methods should be controlled and secured. This is simpler for static assignment to resources that don’t change frequently. Dynamic assignment is required for transient users and devices and should be part of authentication and authorization of the end entity. If a DHCP server is used, secure the service using techniques such as DHCP snooping and dynamic ARP inspection. It’s important to track assigned addresses by associating MAC addresses to IPs to prevent spoofing. Using authentication methods such as 802.1X or MAB and tying them to device profiling and security posture assessments introduces the concept of authorized network access based not only on identity but also capabilities.
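A minimal Python sketch of the binding-table idea behind DHCP snooping and dynamic ARP inspection follows; the class, addresses, and MACs are invented for illustration, not a real tool’s API:

```python
# Hypothetical sketch: track IP-to-MAC bindings learned from trusted DHCP
# exchanges, then flag ARP replies that contradict the table (a common sign
# of ARP spoofing). Purely illustrative, not a production implementation.

class BindingTable:
    def __init__(self):
        self._by_ip = {}  # ip -> mac learned from a trusted DHCP lease

    def record_lease(self, ip: str, mac: str) -> None:
        """Record a binding learned from a trusted DHCP exchange."""
        self._by_ip[ip] = mac

    def check_arp(self, ip: str, mac: str) -> bool:
        """Return True only if an observed ARP reply matches the lease table."""
        expected = self._by_ip.get(ip)
        return expected is not None and expected == mac

table = BindingTable()
table.record_lease("10.0.0.5", "aa:bb:cc:dd:ee:01")

print(table.check_arp("10.0.0.5", "aa:bb:cc:dd:ee:01"))  # True: legitimate host
print(table.check_arp("10.0.0.5", "aa:bb:cc:dd:ee:99"))  # False: likely ARP spoof
```

Real switches enforce this in hardware per port, but the data structure is essentially the same mapping.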


Allocated addresses may need to be translated to allow access to private services from a public network, or to hide the real address of a private resource. Be familiar with unidirectional versus bidirectional access when configuring NAT and PAT methods. If traffic flows are initiated using a translated address, a bidirectional method is required. This can also be combined with application port mapping, which forces connectivity via a non-standard port, hiding the real port. If address translation isn’t an option, but connectivity across a WAN or the internet to remote sites is required, consider tunneling methods such as IPv6-in-IPv4 or IPv4-in-IPv4, which may also be protected with IPsec. Role-based identifiers are also available to add context to a security policy beyond topology-dependent constructs such as IP addresses. Some vendors offer identity-based firewalling, where username-to-IP-address mapping is used to enforce policy.
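As a rough illustration of bidirectional translation with port mapping, here’s a Python sketch of a static translation table; the addresses and helper names are assumptions made up for the example:

```python
# Illustrative sketch of bidirectional NAT with application port mapping:
# inbound connections to a public address/port are rewritten to a private
# host on a (possibly non-standard) internal port, hiding the real service
# port. Addresses and function names are invented for the example.

PORT_MAP = {
    # (public_ip, public_port) -> (private_ip, private_port)
    ("203.0.113.10", 8443): ("10.1.1.20", 443),  # HTTPS hidden behind 8443
    ("203.0.113.10", 2222): ("10.1.1.30", 22),   # SSH on a non-standard port
}

def translate_inbound(dst_ip: str, dst_port: int):
    """Rewrite the destination of an inbound packet, or None if no rule."""
    return PORT_MAP.get((dst_ip, dst_port))

print(translate_inbound("203.0.113.10", 8443))  # ('10.1.1.20', 443)
print(translate_inbound("203.0.113.10", 80))    # None: no rule, handle normally
```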


Once end entities access the network, a solid understanding of routing protocols and packet forwarding techniques adds to overall security. Static routes can be used to redirect traffic for security reasons. Policy-based routing can also redirect or discard traffic, as well as mark certain flows for priority handling. Be familiar with best practices for dynamic routing protocols, as well as any security mechanisms associated with them, such as MD5 authentication of routing updates and TTL-based hop limits for OSPF and BGP.
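The keyed-digest idea behind routing update authentication can be sketched as follows; this simplified Python example ignores real OSPF/BGP packet formats and uses an invented payload and pre-shared key:

```python
import hashlib

# Simplified sketch of keyed-MD5 authentication for routing updates: both
# peers share a key, and a digest over (update + key) lets the receiver
# reject forged or tampered updates. The payload and key are invented; real
# protocol packet layouts differ.

SHARED_KEY = b"routing-secret"  # assumed pre-shared key

def sign_update(payload: bytes) -> bytes:
    return hashlib.md5(payload + SHARED_KEY).digest()

def verify_update(payload: bytes, digest: bytes) -> bool:
    return sign_update(payload) == digest

update = b"advertise 10.2.0.0/16 via 192.0.2.1"
digest = sign_update(update)
print(verify_update(update, digest))         # True: authentic update
print(verify_update(update + b"!", digest))  # False: tampered update rejected
```

An attacker without the key can’t produce a valid digest, which is what protects the routing domain from injected updates.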


Understanding the services and functions important to network users and putting together a topology design helps define security policy elements. The enforcement techniques required, such as access lists, firewall rules, application attack mitigations, and role-based access controls, identify the security feature capabilities needed on network devices.
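As a simple illustration, an access list boils down to first-match rule evaluation; the Python sketch below uses invented rules and addresses:

```python
import ipaddress

# Minimal first-match access list sketch: each rule is (action, network,
# port), evaluated top-down with an explicit deny-all at the end. This is
# purely illustrative of how enforcement points evaluate policy rules.

ACL = [
    ("permit", ipaddress.ip_network("10.0.0.0/8"), 443),
    ("deny",   ipaddress.ip_network("0.0.0.0/0"), None),  # catch-all deny
]

def evaluate(src_ip: str, dst_port: int) -> str:
    addr = ipaddress.ip_address(src_ip)
    for action, net, port in ACL:
        if addr in net and (port is None or port == dst_port):
            return action  # first matching rule wins
    return "deny"

print(evaluate("10.1.2.3", 443))   # permit: internal host to HTTPS
print(evaluate("192.0.2.7", 443))  # deny: falls through to catch-all
```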


In keeping with best practices, several references are available to assist with secure design, including:


  • IETF standards-based BCPs (38), RFCs (1918, 3330, 2827, 3704)
  • Compliance best practices ISO Framework (27001, 27002), COBIT IT security standards
  • Well-known organization documents such as those by SANS and NIST
  • Vendor specific guidelines, recommendations, and limitations
  • Up-to-date vulnerability information from PSIRT, SNORT, and threat intel feeds


In the final blog of the series, we will review methods for monitoring and alerting that will be the barometer for measuring the success of our use case deployment.

I don't want to alarm you, but there are only 97 shopping days left until Christmas. Which explains why the local big-box hardware stores already have Christmas decorations out. I'm going to do my best to enjoy the wonderful autumn weather and not think about the snow, ice, and cold I know are heading my way.


As always, here are some links I hope you find interesting. Enjoy!


Mystery database left open turns out to be at heart of a huge Groupon ticket fraud ring

An interesting twist to the usual database-found-left-open-on-the-internet story.


LastPass fixes flaw that leaked your previously used credentials

If you are using LastPass, please update to the latest version.


3 Nonobvious Industries Blockchain is Likely to Affect

I'm not a fan of the Blockchain, but this article speaks about industries that make interesting use cases. Much more interesting than the typical food supply chain examples.


DNA Data Storage

It may be possible to fit all YouTube data in a teaspoon. That sounds great, but the article doesn't talk about how quickly you can retrieve this data.


Check the scope: Pen-testers nabbed, jailed in Iowa courthouse break-in attempt

Talk about having a bad day at work.


Amazon Quantum Ledger Database (QLDB) hits general availability

This is essentially a transaction log, one that grows forever in size, never gets backed up, and is never erased.


Now that MoviePass is dead, can we please start funding sensible businesses?

Probably not.


No, really, it's fine.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting article by my colleague Mav Turner about the complexities of server and application management. Hyperconvergence makes the old question of, “is it the network or the application?” harder to answer; good thing we have tools to help.


According to a recent SolarWinds federal IT survey, nearly half of IT professionals feel their environments aren’t operating at optimal levels.


The more complex the app stack, the more servers are required—and the more challenging it can be to discover problems as they arise. These problems can range from the mundane to the alarming.


It can be difficult to determine the origin of the problem. Is it an app or a server? Identifying the cause requires being able to visualize the relationship between the two. To do this, administrators need more in-depth insights and visual analysis than traditional network monitoring provides.


The Relationship Between Applications and Servers


Today, applications and servers are closely entwined and can span multiple data centers, remote locations, and the cloud. Virtualized environments make it harder to discern whether an error is the fault of the application or the server.


Administrators must be able to correlate communications processes between applications and servers. Essentially, they need to understand what applications and servers are “saying” to each other and monitor the activities taking place between the two. This detailed understanding can help admins rapidly identify the cause of failures, so they can quickly respond.
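As a toy illustration of this kind of correlation, the Python sketch below summarizes invented connection records by application/server pair and flags high-latency conversations; the record fields and threshold are assumptions for the example:

```python
from collections import Counter

# Illustrative sketch: given observed connection records (fields invented),
# summarize which applications talk to which servers and flag slow
# conversations worth investigating.

connections = [
    {"app": "payroll", "server": "db01",  "latency_ms": 12},
    {"app": "payroll", "server": "db01",  "latency_ms": 340},
    {"app": "portal",  "server": "web02", "latency_ms": 8},
]

# Count conversations per (application, server) pair.
pairs = Counter((c["app"], c["server"]) for c in connections)

# Flag anything over an assumed 100 ms latency threshold.
slow = [c for c in connections if c["latency_ms"] > 100]

print(pairs[("payroll", "db01")])  # 2 conversations for this app/server pair
print(len(slow))                   # 1 slow connection to investigate
```

Monitoring platforms do this continuously and at scale, but the underlying question is the same: which app is talking to which server, and how well.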


Administrators should be able to monitor processes wherever they’re taking place—on-premises or in the cloud. As more agencies adopt hybrid IT infrastructures, keeping a close eye on in-house and hosted applications from a single dashboard will be imperative. Administrators need a complete view of their applications and servers, regardless of location, if they want to quickly identify and respond to issues.


A Deeper Level of Detail


Think of traditional monitoring as providing a broad overview of network operations and functionality. It’s like an X-ray taking a wide-angle view of an entire section of a person’s body, providing invaluable insights for detecting problems.


Application and server monitoring is more like a CT scan focusing on a particular spot and illuminating otherwise undetectable issues. Administrators can collect data regarding application and server performance and visualize specific application connections to quickly identify issues related to packet loss, latency, and more.


The Benefits of a Deeper Understanding


Greater visibility allows administrators to pinpoint the source of problems and respond faster. This can save time and headaches, freeing administrators to work on more mission-critical tasks and move their agencies forward.


Gaining a deeper and more detailed understanding of the interdependencies between applications and servers, as well as overall application performance, can also help address network optimization concerns. Less downtime means a better user experience, and fewer calls into IT: a win-win for everyone.


Growing Complexity Requires an Evolution in Monitoring


Federal IT complexity will continue to grow. App stacks will become taller, and more servers will be added.


Network monitoring practices must evolve to keep up with this complexity. A more complex network requires a deeper, more detailed government network monitoring approach, with administrators looking closely into the status of their applications and servers. If they can gain this perspective, they’ll be able to successfully optimize even the most complex network architectures.


Find the full article on Government Computer News.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Hyperconverged infrastructure has become a widely adopted approach to data center architecture. With so many moving parts involved, it can be difficult to keep up with the speed at which everything evolves. Your organization may require you to be up to speed with the latest and greatest technologies, especially when it decides to adopt a brand-new hyperconverged infrastructure. The question then arises: how do you become a hyperconverged infrastructure expert? There are many paths, depending on the technology or vendor your organization chooses. While there aren’t many HCI-specific certifications yet, there are several certification tracks you can pursue to build HCI expertise.


Storage Certifications

There are many great storage certification options, depending on which vendor your organization uses. If you’re already proficient with storage, you’re a step ahead. The storage vendor isn’t nearly as important as the storage technologies and concepts. Storage networking is important, and getting trained on its concepts will be helpful in your quest to become an HCI expert.


Networking Certifications

There aren’t many certifications more important than networking. I strongly believe everyone in IT should have at least an entry-level networking certification. Networking is the lifeblood of the data center. Storage networking also exists in the data center, and gaining a certification or training in networking will help build your expert status.


Virtualization Certifications

Building virtual machines has become a daily occurrence, and if you’re in IT, it’s become necessary to understand virtualization technology. Regardless of the virtualization vendor of choice, having a solid foundational knowledge of virtualization will be key in becoming an HCI expert. Most HCI solutions use a specific vendor for their virtualization piece of the puzzle, but some HCI vendors have proprietary hypervisors built into their products. Find a virtualization vendor with a good certification and training roadmap to gain knowledge of the ins and outs of virtualization. When it comes to HCI, you’ll need it.


HCI Training

If you already have a good understanding of all the technologies listed above, you might be better suited to taking a training class or going after an HCI-specific certification. Most HCI vendors offer training on their platforms to bring you and your team up to speed and help build a foundational knowledge base for you. Classes are offered through various authorized training centers worldwide. Some vendors offer HCI certifications—while it’s currently a very small amount, I believe this will change over time. Do a web search for HCI training and see what returns. There are many options to choose from depending on your level of HCI experience thus far.


Hands-on Experience

I saved the best for last, as you can’t get better training than on-the-job training. Learning as you go is the best route to becoming an HCI expert. Granted, certifications help validate your experience, and training helps you dive deeper, but hands-on experience is second to none. Making mistakes, learning from them, and documenting everything you do is, in my opinion, the fastest way to become an expert in any field. Unfortunately, not everyone can learn on the job, as most organizations can’t afford to have a production system go down, or to have admins making changes on the fly without prior change board approval. In this case, find an opportunity to build a sandbox, or use an existing one, to build things and tear them down, break things, and fix things. Doing this will help you become the HCI expert your organization desperately needs.

If you attended a major conference or read any industry press over the last handful of years, you’d be excused for thinking everyone would be running on virtual desktop infrastructure (VDI) by now. We’ve been hearing “It’s the year of VDI!” for years. Yet, for a reasonably mature technology, VDI has never had the expected widespread, mainstream impact. When the concept first started getting attention, as with many technologies, everyone latched onto the big value prop that VDI would provide a great cost savings. However, it’s not quite as simple. Total cost of ownership (TCO) is often touted as the key metric with any technology, so today I’d like to explore some of the less apparent costs which go into VDI you should consider when evaluating if it’s the year of VDI for your organization.


Software Costs

You’d think this would be the easy part of the equation. OK, so you’ve got your hypervisor. Chances are, you’re looking at the same VDI vendor you use for traditional VMs, but the licensing model for VDI differs from your traditional hypervisor. Where a traditional hypervisor will typically be sold on a per-processor basis, VDI licensing is usually sold on a per-device or per-user basis. There’s even bifurcation within the per-user basis—you may see licenses sold on a named-user basis (Mary and Stan get licenses, but Joe’s been a bad boy, so no license for him) or a concurrent-user basis (I get “N” licenses, and that’s how many people can connect at a given time). Whichever path you follow should be directed by your use case. If you’re looking to solve for shift workers, follow-the-sun schedules, or similar, you may want to consider concurrent users, so multiple people can take advantage of a single license. If you’ve determined you have specialized workers with specific needs (think power users), then a named license model might make sense. As it pertains to our cost discussion, a concurrent-user license can apply to multiple people and hence has a higher dollar value associated with it, whereas named-user licenses may have a smaller per-unit spend but come at the cost of reduced flexibility.
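A back-of-the-envelope comparison makes the trade-off concrete; the headcounts and prices in this Python sketch are invented purely for illustration:

```python
# Rough sketch comparing named vs. concurrent VDI licensing costs.
# All prices and headcounts are assumptions, not vendor figures.

def named_cost(users: int, price_per_named: float) -> float:
    """Every named user needs a license, whether or not they're connected."""
    return users * price_per_named

def concurrent_cost(peak_concurrent: int, price_per_concurrent: float) -> float:
    """Only licenses for the peak number of simultaneous sessions."""
    return peak_concurrent * price_per_concurrent

# 300 shift workers, but only 120 ever logged in at once:
print(named_cost(300, 150.0))        # 45000.0
print(concurrent_cost(120, 250.0))   # 30000.0: cheaper despite higher unit price
```

The crossover depends entirely on how your peak concurrency compares to total headcount, which is why the use case should drive the licensing model.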


That was the “easy” part of the software element of the equation, but there are several other software considerations to roll up into the TCO of your VDI proposal.



Monitoring

At a high level, when you think about all the various layers to a VDI solution, you need insight into the servers running the platform as well as their underlying infrastructure, the network carrying your VDI data, the hardware your users leverage to access their VDI desktop, and performance within the desktop OS itself. Does your existing monitoring platform have the capabilities to monitor all these elements? If yes, great! You just need to account for some portion of that in your cost calculations. If no, there’s a lot of homework in front of you, and at the end you’re going to need an additional purchase order to get the monitoring platform.


Application & Desktop Delivery

This big topic could be its own post, but how are you going to deliver desktops to your users and how are the applications going to be delivered within the desktop? Are you going to leverage the VDI vendor’s capabilities to deliver applications? Are the apps going to be virtualized? Some of these options come with higher-level licensing from your VDI platform provider, but if you go with a lower-tier VDI license, you might want a third-party delivery mechanism. Or you could do it manually, but we’ll come back to that in a minute.



Backup

At some level you’re just backing up a bunch of VMs, but does your current solution meet the unique needs of a VDI environment? If you’re deploying any persistent desktops at all, the backup design will look very different from the minimal needs a non-persistent design presents. Don’t forget delivery of your VDI solution will likely encompass a number of servers that should be considered for protection.


One last word on software costs: A C-level exec once said to me, “We don’t need to buy operating system licenses; we’re virtualized.” It doesn’t quite work this way. Take the time to understand the licensing agreements for your desktop OS. I promise you’ll want to proactively learn what they say before the vendor’s auditors come knocking.


Hardware Costs

The most obvious hardware cost is how you connect to your VDI environment. If you’re a BYOD shop, the job is done—just provide your users the agent they need. Typically though, you’re going to be evaluating zero clients against thin clients. Zero clients are essentially dumb terminals with little configurability and little flexibility, but it’s probably your cheapest option to purchase. Thin clients can cost the equivalent of a desktop PC, but you get a lot more horsepower, as they typically have better chipsets, memory, and graphics. Thin clients will usually support more protocols if you leverage multiple solutions as well. Know your users and understand their workloads to help you decide on client direction.


In my experience, storage plays a very important role in the success of the VDI project. Do you plan on leveraging persistent or non-persistent desktops? The answer will drive whether you need additional storage capacity to support persistent desktops or not. Have you ever experienced a boot storm? If you have, then you know your storage components can create a bottleneck affecting your user experience. Take the time to evaluate your IOPS needs and whether all the components of your storage sub-system can support everyone in the organization logging on at 9 a.m. on a Tuesday following a long holiday weekend. Failure to do so could result in an unexpected and potentially expensive “opportunity” for a new storage project.
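A quick way to sanity-check the boot-storm scenario is a rough IOPS estimate; all numbers in this Python sketch are assumptions, not vendor guidance:

```python
# Rough boot-storm sizing sketch: the sustained IOPS the storage tier must
# absorb if many desktops boot inside the same window. Every number here is
# an invented assumption for illustration.

def boot_storm_iops(desktops: int, io_per_boot: int, window_seconds: int) -> float:
    """Average IOPS needed to complete every boot within the window."""
    return desktops * io_per_boot / window_seconds

# 500 desktops, ~30,000 I/Os per boot, everyone logging in over 15 minutes:
print(round(boot_storm_iops(500, 30_000, 900)))  # 16667 IOPS sustained
```

Even with made-up inputs, the shape of the math shows why a storage tier sized for steady-state desktop load can fall over at 9 a.m. on a Tuesday.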


Opportunities and Opportunity Cost

What you give up and gain from a VDI solution is probably going to be the hardest part to quantify but should be one of the larger drivers of the initiative. Troubleshooting a Windows desktop, for example, is a relatively straightforward process. What if the Windows machine is a VDI desktop? Once you’ve converted to a virtual desktop infrastructure, you now have to troubleshoot the OS, the connecting hardware, VDI protocol, network, hypervisor, storage, and so on. Does your organization have the appetite for the time and resource commitments to retool your team to handle this new paradigm? Conversely, anyone who’s had to patch hundreds or thousands of desktops (and deal with the fallout) will probably appreciate the simplicity of patching a single golden image.


How security-conscious is your organization? If data loss prevention is a big concern for you, then all the other costs may fall by the wayside, as a VDI solution provides a lot of security measures to better protect your organization right out of the box. What about offering seamless upgrades to your users? How much value would you place on that user experience? I know we find it highly valuable both from an effort and a goodwill perspective.


A lot of hidden costs and considerations can trip up your VDI initiative. While it’s hard to cover everything, hopefully this piece helps illuminate some dark corners.

A security policy based on actual use cases has been documented, as have the components of the ecosystem. Before devising a practical implementation and configuration plan, one more assessment should be done involving the application of best practices and compliance mandates.


Best practices are informative rule sets that provide guidelines for acceptable use, resource optimization, and well-known security protections. These rules may be derived from those commonly accepted in the information technology industry, from vendor recommendations and advisories, from legislation, and from specific business mandates.


In the more formal sense, best practices are outlined in frameworks built from industry standards. These standards define system parameters and processes as well as the concepts and designs required to implement and maintain them. Standards-based best practices can be used as guidelines, but for many entities, their application is mandatory.


Standards-Based Frameworks

Well-known open standards applicable to IT governance, security controls, and compliance are:


ISO/IEC 27000 (Replicated in various country-specific equivalents)

The Code of Practice for Information Security Management addresses control objectives and focuses on acceptable standards for implementing the security of information systems in the areas of:

  • Asset management
  • Human resources security
  • Physical security
  • Compliance
  • Access control
  • IT acquisition
  • Incident management


The 27000 framework is outlined in two documents:

  • 27001 – the certification standard for measuring, monitoring, and security management control of Information Security Management Systems

  • 27002 – security controls, measures, and code of practice for implementations and the methodologies required to attain the certification defined in 27001



COBIT (Control Objectives for Information and Related Technology)

COBIT is a recognized framework for IT controls and security. It provides guidance to the IT audit community in the areas of risk mitigation and avoidance. It’s focused more on system processes than on the security of those systems, with control objectives defined in four domains: Planning and Organization, Acquisition and Implementation, Delivery and Support, and Monitoring and Evaluation.



PCI DSS (Payment Card Industry Data Security Standard)

The Payment Card Industry Security Standards Council (PCI SSC) defines Payment Card Industry (PCI) security standards with a focus on improving payment account security throughout the transaction process. The PCI DSS is administered and managed by the PCI SSC, an independent body created by the major payment card brands (Visa, MasterCard, American Express, Discover, and JCB). These payment brands and acquirers are responsible for enforcing compliance and may fine an acquiring bank for compliance violations.


Compliance-Mandated Frameworks

While also based on best practices, these frameworks focus on industry specific security controls and risk management. Compliance is mandatory and monitored by formal audits conducted by regulatory bodies to ensure certification is maintained in accordance with the industry and any legislation defined in a governing act. Failure to satisfy the criteria can leave an entity open to legal ramifications, such as fines and even jail time. Often standards-driven best practices documents, such as ISO 27002, are the foundation for application of requirements defined in each act.


Three of the most common regulatory and legislative acts are:


GLBA (Gramm-Leach-Bliley Act)

Primarily applicable to the U.S. financial sector, the act covers organizations engaging in financial activity or classified as financial institutions, which must establish administrative, technical, and physical safeguard mechanisms to protect information. The act also mandates requirements for identifying and assessing security risks, planning and implementing security solutions to protect sensitive information, and establishing measures to monitor and manage security systems.


HIPAA (Health Insurance Portability and Accountability Act)

Applies to organizations in the health care sector in the U.S. Institutions must implement security standards to protect confidential data storage and transmission, including patient records and medical claims.


SOX (Sarbanes-Oxley Act)

Also known as the U.S. Public Company Accounting Reform and Investor Protection Act, it holds corporate executives of publicly listed companies accountable in the area of establishing, evaluating, and monitoring the effectiveness of internal controls over financial reporting. The act consists of 11 titles outlining the roles and processes required to satisfy the act, as well as reporting, accountability, and disclosure mandates. Although the titles don’t address security requirements specifically, title areas calling for audit, data confidentiality and integrity, and role-specific data access will require the implementation of a security framework such as ISO 27000 and/or COBIT.


Even if an organization doesn’t need to satisfy a formal mandate, understanding the content of well-defined security frameworks can ensure no critical data handling processes and policies are missed. If a formal framework is required, it will influence the tools and best practices methods used for policy implementation as well as monitoring and reporting requirements. These topics will be covered in the final two installments of this blog series.


The weather has turned cool as autumn approaches, and everyone here is in back-to-school mode. In past years, September has been filled with events for me to attend. But this year there are none, giving me more time to enjoy sitting by the fire.


As always, here are some links I hope you find interesting. Enjoy!


More Than Half of U.S. Adults Trust Law Enforcement to Use Facial Recognition Responsibly

The results of this survey by Pew help show people have no idea what civil liberties mean. You can't say it's acceptable for law enforcement to use this tech to track criminals but not acceptable for it to track your activities. That's not how this works.


Michigan bans flavored e-cigarettes to curb youth vaping epidemic

Finally, someone stepping forward to take action. My town wouldn't allow an adult bookstore because of the problems it *could* cause, but we have three vaping shops within walking distance of the high school.


Facebook Dating has launched in the United States

What's the worst that can happen?


Hundreds of millions of Facebook users’ phone numbers found lying around on the internet

As I was just saying, it is clear just how much Facebook values your security and privacy.


Ranking Cities By Salaries and Cost of Living

I've never been to Brownsville, but apparently that's where everyone would want to earn a paycheck.


Japanese Clerk Allegedly Stole Over 1,300 Credit Cards By Instantly Memorizing All the Numbers

I'm not even mad, I'm impressed.


Artificial Intelligence Will Make Your Job Even Harder

Interesting take on the dangers of automating away those boring tasks in your daily life.


Was exploring a new bike path and found this view as a result. It's always fun to discover new things not far from home.

Even after all these years in technology, I remain in awe of IT pros. Watching my kids’ classes, it seems everyone—including elementary school students and other civilians—is practicing truly geeky, hands-on-keyboards arts. We’re also seeing more casual administrators—people with a keen interest in spending some time managing networks and applications on the way to another role. But IT pros are different.


Like teachers, firefighters, and healthcare workers, IT pros tend to go where help is most needed. They endure simultaneously cacophonous and freezing server rooms, suffer the indignities of cost center budget processes, juggle multiple business teams with competing priorities, and regularly work nights, weekends, and holidays, all while presenting calm to assure end users. IT pros don’t work jobs. They’re called to be helpful.


And today they’re doing something they haven’t done in a decade in response to external forces. They’re jumping (mostly) without a net to fail fast, while learning new (and in some cases immature) solutions like hybrid, cloud-native, data science, automation, and more. They’re also accepting the push toward service-based licensing, even with the added specter of a career-limiting OpEx bill only a click—or API call—away.


And none of this would be possible, especially including the major changes business now demands in the pursuit of transformation, without IT professionals. These projects demand conviction, endurance, and creative thinking about how they’ll be maintained years from now. They drive new needs to engage business leaders and ask tough or politically unpopular questions as they modernize legacy apps. And these projects are cornucopias of unknowns and new risks only considerable experience and skill can mitigate.


That’s why I remain in awe of IT pros. And on IT Pro Day 2019, it’s important we recognize the people in tech who make the world work. Here’s to the dedicated men and women who’ve charted their careers by solving problems, enabling the business, and are always there for us whenever we need help.


Perhaps it’s IT pros who are the original five nines.

Recently, I was building out a demonstration and realized I didn’t have the setup I needed. After a little digging, I realized I wanted to show how to track changes to containers. This meant I needed some containers I could change, which meant installing Docker.


If this sounds like the usual yak shaving we IT professionals go through in our daily lives, you’d be right. And even if I told you I’d never spun up my own containers—or installed Docker, for that matter—you’d probably still say, “Yup, sounds like most days ending in ‘y.’”


Because working in IT means figuring it out.


I’d like to tell you Docker installed flawlessly; I was able to scan the documentation and a couple of online tutorials and get my containers running in a snap; I easily made changes to those containers and showcased the intuitive nature of my Docker monitoring demo.


I’d like to say all of those things, but if I did, you—my fellow IT pros—would know I was lying. Because figuring it out is sometimes kind of a slog. Figuring it out is more often a journey from a series of “Well that didn’t work” moments to “Oh, so this is how it’s done?” Or, as I like to tell my non-techie friends and relatives, “Working in IT is having long stretches of soul-crushing frustration, punctuated by brief moments of irrational euphoria, after which we return to the next stretch of soul-crushing frustration.”


That’s not to say we who make our career in IT don’t get lucky from time to time. But, as Coleman Cox once said, “I am a great believer in Luck. The harder I work, the more of it I seem to have.”


As we work through each day, solving problems, shaving yaks, and generally figuring it out, we amass to ourselves a range of experiences which—while they may be a bit of a slog at the time—increase not only our knowledge of how this thing (the one we’re dealing with right now) works, but also of how things work in general.


While it’s less relevant now, back in the day I used to talk about the number of word processors I knew—everything from WordStar to WordPerfect to Word—close to a dozen if you counted DOS and Windows versions separately. At the time, this was a big deal, and people asked how I could keep them straight. The answer was less about memory and more about familiarity born of experience. I likened it to learning card games.


“When you learn your first card game,” I’d point out, “it’s completely new. You have nothing to compare it to. So, you learn the rules and you play it. The second game is the hardest because it completely contradicts what you thought you knew about ‘card games’ (since you only knew one). But then you learn a third, and a fourth, and you start to get a sense of how card games in general work. There’s nothing intrinsically special about an ace or a jack or whatever, and card games can work in a variety of ways.”


Then I’d pull it back around to word processors: “After learning the third program, you realize there’s nothing about spell check or print or word-wrap unique to MultiMate or Ami Pro. And once you have a range of experience, you’re able to see how WordPerfect’s ‘Reveal Codes’ was totally unique.”


Which makes a nice story. But there’s more to it than that. As my fellow Head Geek Patrick Hubbard pointed out recently, those of us who mastered WordPerfect discovered learning HTML was pure simplicity, specifically because of the “reveal codes” functionality I mentioned earlier.

Image: https://2.bp.blogspot.com/-3B6KHm5x3JQ/WrrSvt1pIAI/AAAAAAAABvw/wQLhAE28Aak8AkI13Ylg0M8iJmZofgV5ACLcBGAs/s400/2-revealcodes.png


Anyone who knows HTML should feel right at home with the view on the bottom half of the screen.


Having taken the time to slog through WordPerfect (which was, in fact, the second word processor I learned), I not only gained skills and experience in using the software, but I unknowingly set myself up to have an easier time later.


And this experience was by no means unique: many times, a piece of knowledge I'd struggled to acquire in one context turned out to be both relevant and incredibly useful in another. Nor is this kind of experience limited to IT professionals. We all have them. The experiences we have today all feed into the luck we have tomorrow.


So, on this IT Pro Day, I want to salute everyone in our industry who shows up, ready to do the hard work of figuring it out. May the yaks you must shave be small, and the times you find yourself saying “Wait, I already know this!” be many.


IT Pro Day Turns Five

Posted by sqlrockstar Employee Sep 10, 2019

Today is the 5th annual IT Pro Day, a day created by SolarWinds to recognize the IT pros who keep businesses up and running each and every day, all year long. Five years makes for a nice milestone, but in IT time it isn't long. Many of you IT pros reading this likely support systems two or three times as old.


As an IT pro myself, I know no one ever stops by your desk to say “thanks” for everything working as expected. That’s not the way the world works. I’ve never called to thank my cable company, for example. No, people only contact IT pros for one of two reasons: either something is broken, or something might become broken. And if it’s not something you know how to fix, you’ll still be expected to fix it, and fast.


And thanks to the ever-connected world in which we live, IT pros are responding to calls for help at all hours of the day. Not just work calls either. Family and friends reach out to ask for help with various hardware and software requests. Just a few weeks ago I had to show a friend who was struggling with some data how to create a PivotTable in Excel while sitting around a fire.


IT pros don’t do it for the money. We do it because it sparks joy to help others. Sure, the money helps bring home the bacon, but that’s not our end goal. We want what anyone wants: happy customers. And we make customers happy because we respond to alerts when called, we reduce risk by automating away repetitive tasks, and we fix things fast (and those fixes last, sometimes for years).


Today is the day to say THANK YOU to the IT pro, and even give a #datahug to the ones who had enough time to shower before heading to the office.



Sascha Giese

IT Pros and Pirates

Posted by Sascha Giese Employee Sep 10, 2019

Ah yes, it’s IT Pro Day again – very soon!


Each year, on the third Tuesday of September, we celebrate the unsung heroes of IT. Well, someone did sing, but that’s a different story.

IT Pro Day is certainly more important than Talk Like a Pirate Day, celebrated just two days later.
Long John Silver would probably disagree, but back in his day, there was no IT on ships, only curved swords.


I can’t think of another job where learning and growing are so essential, so rewarding, and, at the same time, so straightforward.
Depending on what niche you work in, you can develop your learning path, study, sit an exam, and broaden your horizons while opening new opportunities at work as soon as you start sharing your knowledge.
You’re in full control, and that’s one of the best things ever!

Plus, the convenience of working with at least two screens means there’s always room for a browser game.

Just in case you need to work overtime while ensuring the background scripts execute correctly. They could never do this without your supervision!

Another perk of working in IT is that you can decorate your desk with pictures of Wookiees and elves and unicorns and sharks with lasers and no one would ever judge.

They wouldn’t understand anyway.


Still, some days drag, just as they do in many other jobs.

For example, there might be that person working in accounting who frequently causes trouble with, let’s say, BitLocker.
As you fix it for the third time, you wish you could go back to the days of the BOFH in the ’90s. But unfortunately, that’s not possible.
Close your eyes for a second. Think of pirates.

On that note, do you know the worst thing you could do as an IT pro? Show up at a random party, glass of whatever in hand, and have someone ask you, “So, what do you do?”
Whatever you do, DO NOT respond with, “I work with computers.”

I’ve been there. It won’t end well.

Instead, I suggest saying, “I’m a pirate, arrr,” and focus on your drink.


Happy IT Pro Day everyone.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting article by my colleague Brandon Shopp with suggestions on delivering a great user experience to staff and constituents. I’d say security is more important than UX for many of our government customers, but there are ways of making improvements more securely.


While much has been written about the need to modernize federal IT networks, little has been said about the importance of optimizing the user experience for government employees. Yet modernization and the delivery of a good user experience go hand-in-hand.


Here are three strategies you can employ to ensure a seamless, satisfactory, and well-optimized experience while eliminating headaches and enhancing productivity.


Understand Your Users


  • What applications do they need to do their jobs?
  • What tools are they using to access those applications?
  • Are they using their own devices in addition to agency-issued technology?
  • Where are they located?
  • Do they often work remotely?


Let’s discuss users who routinely use their personal smartphones to access internal applications. You’ll have to consider whether to authorize their devices to work on your internal infrastructure or introduce an employee personal device network and segment it from the main network. This can protect the primary network without restricting the employee from being able to use said device or application.


Similar considerations apply to federal employees who routinely work remotely. If this applies, you’ll want to employ segmentation to ensure they can access the applications they need without potentially disrupting or compromising your primary network.


Monitor Their Experience


Synthetic and real user monitoring can help you assess the user experience. Synthetic monitoring allows you to test an experience over time and discover how specific instances affected it. Real user monitoring lets you analyze real transactions as users interact with your agency’s applications.


Both of these monitoring strategies are useful on their own, but they really shine when layered on top of one another. A synthetic test may show everything running normally, but if users are experiencing poor quality of service, something is clearly amiss. Perhaps you need to allocate more memory or optimize a database query. Comparing the real user monitoring data with the synthetic test can give you a complete picture and help you identify the problem.
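As a minimal sketch of the synthetic side (assuming a plain HTTP endpoint and using only the Python standard library; the URL and thresholds here are placeholders, not a reference to any specific tool), a scheduled check might record status and latency like this:

```python
import time
import urllib.request

def synthetic_check(url: str, timeout: float = 5.0) -> dict:
    """Run one synthetic transaction: fetch a URL, record status and latency."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except Exception as exc:  # network errors count as failed checks
        return {"url": url, "ok": False, "error": str(exc)}
    latency_ms = (time.monotonic() - start) * 1000
    return {
        "url": url,
        "ok": 200 <= status < 400,
        "status": status,
        "latency_ms": round(latency_ms, 1),
    }
```

Running a check like this on a schedule builds the baseline you later compare against real user monitoring data; a divergence between the two is the signal that something is amiss.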


However, applications are only the tip of the iceberg. You also need to be able to see how those applications are interacting with your servers, so you can be proactive in addressing issues before they affect users.


Obtaining this insight requires going a step beyond traditional systems monitoring. It calls for a level of server and application monitoring that allows you to visualize the relationship between the application and the server. Without understanding those interdependencies, you’ll be throwing darts in the dark whenever an issue arises.


Pre-Optimize the Experience


It’s also important to provide exceptional citizen experiences, particularly for agencies with applications that must endure periods of heavy user traffic.


These agencies can plan for periods of heavy usage by simulating loads and their impact on applications. Synthetic monitoring tools and network bandwidth analyzers can be instrumental in simulating and plotting out worst-case scenarios to see how the network will react.


If you’re in one of these agencies, and you know there’s the potential for heavy traffic, take a few weeks, or even months, in advance to run some tests. This will allow you to proactively address the challenges ahead—such as purchasing more memory or procuring additional bandwidth through a service provider—and “pre-optimize” the user experience.
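A rough load-simulation sketch of the idea above (the `task` callable and the call counts are placeholders; in practice the task would issue a real request against a staging copy of the application, and a dedicated load-testing tool would do this at far greater scale):

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def simulate_load(task, total_calls: int = 100, concurrency: int = 10) -> dict:
    """Run `task` total_calls times across `concurrency` workers and
    summarize the observed latencies in milliseconds."""
    def timed_call(_):
        start = time.monotonic()
        task()
        return (time.monotonic() - start) * 1000

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed_call, range(total_calls)))

    return {
        "calls": total_calls,
        "p50_ms": round(statistics.median(latencies), 1),
        "p95_ms": round(latencies[int(len(latencies) * 0.95) - 1], 1),
        "max_ms": round(latencies[-1], 1),
    }
```

Ramping `concurrency` up while watching the p95 and max figures is a simple way to find the point where the application starts to degrade, well before real traffic finds it for you.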


All the investments agencies make towards network modernization are fruitless without a good user experience. If someone can’t access an application, or the network is horrendously slow, the investments won’t be worth much. Committing to providing great service can help users make the most of your agency’s applications and create a more efficient and effective user experience.


Find the full article on Federal Times.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Running a traditional data center infrastructure for years can put your company in a rut, especially when it’s time to pick a new solution. When electing to trade out your traditional infrastructure for a sleek new hyperconverged infrastructure (HCI), the paradigm shift can be difficult. Many questions arise during selection, and many HCI vendors are willing to answer them, which doesn’t necessarily make it easier. When deciding to switch to an HCI solution, it’s important to take stock of your current situation and assess why you’re searching for a new solution. Here are some things to think about when choosing an HCI solution.


Do You Have Experienced Staff?

Having staff on-hand to manage an HCI solution should be the main concern when choosing a solution. Traditional server infrastructures generally rely on several siloed teams to manage different technologies. When there are storage, networking, server, and security personnel, it’s important to decide if an all-in-one HCI solution is a possibility. Is there enough time to get your team spun up on the latest HCI solution and all the nuances it brings? Take a good look at your staff and take stock of their skillset and level of experience before diving headfirst into a brand-new HCI solution.


Support, Support, Support

Support is only considered expensive until it isn’t. When your new HCI solution isn’t working as planned or your team is having trouble configuring something, a support call can come in very handy. If the HCI solution you’re looking into doesn’t have the level of support to meet your needs, forget about it. It does no good to pay for support you can’t rely on when it all comes crashing down, which it can from time to time. Ensure the vendor’s support provides coverage for both hardware and software and offers support coverage hours suited to your needs. If you’re a government entity, does the vendor provide a U.S.-citizen-only support team? These are all important questions to ask of prospective vendors.


How Will the HCI Solution Be Used?

First things first, how will you be using the HCI solution? If your plan is to employ a new HCI solution to host your organization’s new VDI implementation, specific questions need to be addressed. What are the configuration maximums for CPU and memory, and how much flash memory can be configured? VDI is a very resource-intensive project, and going into the deployment without the right amount of resources in the new HCI solution can put your organization in a bad spot. If the idea of HCI procurement is coming specifically from an SMB/ROBO situation, it’s extremely important to get the sizing right and ensure the process of scaling out is simple and fast.


Don't Get Pressured Into Choosing

Your decision needs to come when your organization is ready, not on a vendor’s schedule or under pressure to commit. Purchasing a new HCI solution is not a small decision and it can come with some sticker shock, so it’s important to choose wisely and choose what’s right for your organization. Take stock of the items I listed above and decide how to proceed with all the vendor calls, which will flood your phones once you decide you’re looking for a new HCI solution.

In this third post on the subject, I’ll discuss hyperconverged architectural approaches. I want to share some of my experiences in how and why my customers have taken different choices, not to promote one solution over another, but to highlight why one approach may be better for your needs than another.


As I’ve alluded to in the past, there are two functional models for the hyperconverged approach. The first is the appliance model: a single prebuilt design, typically a six- or eight-rack-unit box composed of x86 servers, each with its own storage shared across the entire device, housing an entire virtual infrastructure. In the other model, devices are dedicated to each purpose (storage or compute), and one or the other is added as needed.


The differing approaches have key financial implications for how you scale your architecture. Neither is wrong, but either could be right for the needs of the environment.


Scalability should be the first concern, and manageability the second, when making these decisions. I’ll discuss both issues, and how the choice of one model over the other may affect how you build your infrastructure. As an independent reseller with no bias toward any vendor’s approach, these are the first questions I outline to my customers during the HCI conversation. I also get the opportunity to see how different approaches work in implementation and afterwards in day-to-day operations. Experience should be a very important element of the decision-making process. How do these work in the real world? After some time with the approach, is the customer satisfied they’ve made the correct decision? I hope to outline some of these experiences, give insight into the potential customer’s perspective, highlight some "gotchas," and aid in the decision-making process.


A word about management: the ability to manage all your nodes through one interface, ideally via vCenter or some other centralized platform, is table stakes. The ability to clone, replicate, and back up from one location to another is important. The capacity to deduplicate your data is not part and parcel of every architecture, but even more important is the ability to do so without degrading performance. Again, this is important for the decision-making process.


Scalability is handled very differently depending on your architecture. For example, if you run out of storage on your cluster, how do you expand it? In some cases, a full second cluster may be required. However, there are models in which you can add storage-related nodes without having to increase the processor capacity. The same holds true for the other side: if you have ample storage but your system is running slowly, you may need to add another cluster to expand, whereas in the node-by-node model, compute capacity can be increased simply by adding compute nodes. This may or may not be relevant to your environment, but if it is, it should factor into the ROI of the scalability you may require. I believe it’s a valuable consideration.


In my role as an architect for enterprise customers, I don’t like conversations in which the quote for the equipment is the only concern. I much prefer to ask probing questions along the lines I’ve discussed above to help the customer come to terms with their more important needs and make the recommendation based on those needs. Of course, the cost of the environment is valuable to the customer. However, when doing ROI valuation, one must account for the way the environment may be used over the course of time and the lifecycle of the environment.


In my next post, I’ll discuss a bit more about the storage considerations inherent to varying approaches. Data reduction, replication, and other architectural approaches must be considered. But how? Talk to you in a couple weeks, and of course, please feel free to give me your feedback.

Had a great time at VMworld last week. I enjoyed speaking with everyone who stopped by the booth. My next event is THWACKcamp™! I've got a few more segments to film and then the show can begin. I hope to "see" all of you there in October.


As always, here are some links I hope you find interesting. Enjoy!


VMware Embarks on Its Crown Jewel’s Biggest Rearchitecture in a Decade

Some of the news from VMworld last week. Along with their support for most public clouds (sorry Oracle!), VMware is pivoting in an effort to stay relevant for the next five to eight years.


Google says hackers have put ‘monitoring implants’ in iPhones for years

The next time the hipster at the Genius Bar tries to tell me Apple cares about security, I'm going to slap him.


Amazon's doorbell camera Ring is working with police – and controlling what they say

This really does have "private surveillance state" written all over it.


Volocopter’s air taxi performs a test flight at Helsinki Airport

At first I thought this said velociraptor air taxi and now I want one of those, too.


Fraudsters deepfake CEO's voice to trick manager into transferring $243,000

Interesting attack vector with use of deepfake tech. Use this to raise awareness for similar scams, and consider updating company policies regarding money transfers.


About the Twitter CEO '@jack hack'

Good summary of what happened, and how to protect yourself from similar attacks not just on Twitter, but any platform that works in a similar manner.


Employees connect nuclear plant to the internet so they can mine cryptocurrency

What's the worst that could happen?


From the VMworldFest last week, a nice reminder that your documentation should be kept as simple and concise as possible:
