
Geek Speak


We’ve all heard it before.

 

“The cloud is the future!”

“We need to move to the cloud!”

“The on-premises data center is dead.”

 

If you believe the analysts and marketing departments, public cloud is the greatest thing to happen to the data center since virtualization. But is it true? Could public cloud be the savior of the IT department? While many features of the public cloud make it an attractive infrastructure replacement, failing to adequately plan for its use can prove a costly mistake.

 

Moving past the marketing, the cloud is simply “someone else’s computer.” Yes, it’s more complicated than that, but when you boil it down to the basics, it’s a data center maintained by a third party, with proprietary software on top to provide an easy-to-use dashboard for provisioning and monitoring. When you move to the cloud, you’re still running an application on a server, and many of the same problems you have with your application running on-premises can persist in the cloud.

 

In a public cloud environment, the added complexity of multi-tenancy on the underlying resources can complicate things. Now you have to think about regulatory compliance, too. And, after all, the public cloud is still a data center subject to human error. This has been made evident over and over, most famously by the Amazon Web Services S3 outage of February 2017.* The wide adoption of public clouds such as AWS and Microsoft Azure has also opened the door to more instances of shadow IT: rogue devs, admins, and end users who either don’t have the patience to wait or have been denied resources open cloud accounts with their own credit cards and put corporate data at risk. And we have yet to even take into consideration the consumption-based billing model.

 

Even with the “issues” listed above (I use quotes because some of these problems can also be encountered in the private cloud, or worked around), public cloud can be an awesome tool in the IT administrator’s toolbox. Properly architected cloud-based applications can alleviate performance issues and can be developed with robust redundancies to avoid downtime. The ability to quickly scale compute up and down based on demand gives the business agility never before seen in the standard data center procurement cycle. And the growing world of SaaS products provides an easy gateway into the cloud (yes, I’m going to take the stance that as-a-Service qualifies as cloud). The introduction of cloud technologies has also opened a world of new application deployment models, such as microservices and serverless computing. These new ways of looking at infrastructure weren’t possible until recently.

 

Is there hype around public cloud? For sure! Is some of it warranted? Absolutely! Is it the be-all and end-all technology of the future? Not so fast. In the upcoming series of posts I’m calling “Battle of the Clouds,” we’ll look at public cloud versus private cloud, going past the hype to dive into the state of on-premises data centers, what it takes for a successful cloud implementation, and workload planning around both solutions. I look forward to hearing your opinions on this topic as well!

 

*Summary of the Amazon S3 Service Disruption in the Northern Virginia (US-EAST-1) Region

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Brandon Shopp with ideas for protecting IT assets in battlefield situations.

 

“Dominance” and “protection” sum up the Defense Department’s goals as U.S. armed forces begin to modernize their networks and communications systems. DOD is investing significant resources in providing troops with highly advanced technology so they can effectively communicate with each other and allies in even the harshest environments.

 

Efforts like the Army’s ESB-E tactical network initiative, for example, represent an attempt to keep warfighters constantly connected through a unified communications network. These solutions will be built off more scalable, adaptable, and powerful platforms than those provided by older legacy systems.

 

Programs like ESB-E are being designed to provide wide-scale communications in hostile territory. It will be incumbent upon troops in the field to monitor, manage, and secure the network to fulfill the “protection” part of DOD’s two-fisted battlefield domination strategy.

 

Moving forward to take this technological hill, DOD should keep these three considerations in mind.

 

  1. The Attack Surface Will Increase Exponentially

 

Over the years, the battlefield has become increasingly kinetic and dependent upon interconnected devices and even artificial intelligence. The Army Research Laboratory calls this the internet of battlefield things—a warzone with different points of contact ultimately resulting in everything and everyone being more connected and, thus, intelligent.

 

The Pentagon is looking to take the concept as far as possible to give warfighters a tactical and strategic edge. For example, the Army wants to network soldiers and their weapons systems, and the Navy plans to link its platforms across hundreds of ships.

 

Opening these communication channels will significantly increase the potential attack surface. The more connection points, the greater the threat of exposure. Securing a communications system of such complexity will prove to be a far more daunting challenge than what’s involved in monitoring and managing a traditional IT network. Armed forces must be prepared to monitor, maintain, and secure the entire communications system.

 

  2. Everyone Must Have Systems Expertise

 

The line between soldiers and system administrators has blurred as technology has advanced into the battlefield. As communications systems expand, all service members must be able to identify problems to ensure both unimpeded and uninterrupted communications and the security of the information being exchanged.

 

All troops must be bought into the concept of protecting the network and its communications components and be highly skilled in managing and maintaining these technologies. This is particularly important as communications solutions evolve.

 

Soldiers will need to quickly secure communications tools if they’re compromised, just as they would any other piece of equipment harboring sensitive information or access points. And they will require clear visibility into the entirety of the network to be able to quickly pinpoint any anomalies.

 

  3. Staff Must Increase Commensurate to the Size of the Task

 

The armed forces must bulk up on staff to support these expansive modern communications systems. Fortunately, the military has a wealth of individuals with network and systems administration experience. Unfortunately, it lacks depth in other critical areas.

 

Security specialists remain in high demand, but the cybersecurity workforce gap is real, even in the military. The White House’s National Cyber Strategy offers some good recommendations, including reskilling workers from other disciplines and identifying and fostering new talent. The actions highlighted in the plan coalesce with DOD’s need to fortify and strengthen its cybersecurity workforce as it turns its focus toward relentlessly winning the battlefield communications war.

 

Whoever wins this war will truly establish dominance over air, land, sea, and cyberspace. Victory lies in educating and finding the right personnel to protect information across what will undoubtedly be a wider and more attractive target for America’s adversaries.

 

Find the full article on Government Computer News.

 

  The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

We talk a lot about professional development these days—how to develop your speaking or writing skills to make you a better practitioner. But what if you’ve reached a certain point in your career and you feel it’s time for a change?

 

Do you stay a technologist, or do you make the leap to management? It’s a conversation I’ve had with many people over the years and a question I’ve wrestled with myself. Several years ago, I made the jump. At the time, it was the wrong choice. In the following years, I spoke with many leaders and gathered many thoughts on what makes for a successful and satisfied IT leader. Below are the top five considerations I wish someone had shared with me prior to taking the management path.

 

How in Love With Technology Are You?

We could debate the best course, but often the person promoted to management is the one who knows the most about their technology. Ironically, once in management, you aren’t as hands-on as you once were. If all you want to do is geek out on tech, I’d urge you to be cautious about moving into management. Talk with your boss and other leaders to understand how much tech will (or won’t) be a part of the new management position.

 

How “Confident” Are You?

This question is something of a misnomer. What I really mean is: how are your soft skills, and do you enjoy leveraging them? Once you move into management, you’ll still need to be able to talk the talk and periodically walk the walk of a technologist, but your ability to communicate will become much more important. The same can be said of writing, presentation, and financial skills. Taking some time to evaluate what you truly enjoy doing and where your skills lie will make for a much more informed decision on whether you should take the management plunge.

 

Do Your Thoughts/Ideals/Morals Align With Your Organization?

While it’s possible to make changes affecting your workplace once you reach management, I’d caution you against thinking you can solve everything wrong with the world and steer your company in a completely different direction. Take the time to look at how your organization treats people. Are other managers satisfied in their roles? Do you generally agree with the direction and leadership of your organization? Depending on the type of person you are, this isn’t a make-or-break question. Regardless of your role, it’s often easier and more effective to row in the same direction as your organization.

 

How Thick Is Your Skin?

This one is simple: when you lead a team, you’re ultimately responsible for the team’s actions and productivity within the organization. On the surface, this seems like a noble charge, and it is. However, there will be days when people just want to yell at whoever is in charge—you. I work within a fantastic, compassionate organization, and there are still days where “The Buck Stops Here” means I’m going to get earfuls from various corners of the org. It goes with the territory. Are you OK finding yourself in this position? It’s a question you’ll want to consider.

 

From What Do You Derive Value?

I’m not talking about the financial reward of moving into management, although hopefully there’s something of a reward. What I’m talking about is how you motivate yourself and where your sense of accomplishment comes from. Basically, what makes you tick?

For me, this was the biggest adjustment in going from an individual contributor role to management. The reward systems can be very different. As an individual contributor, the feedback is often immediate in that, when you achieve or accomplish a task, there’s immediate satisfaction from completion. Many technologists love this aspect of IT—you’re constantly getting things done, you’re a doer. Depending on the size of your org, when you move into management, you’ll be removed from at least some of the day-to-day “doing.” So, where do the rewards come from?

 

I’ve had this discussion with leaders of organizations of varying sizes, and a couple of themes come to the fore across these chats. Successful and satisfied managers talk about the joy of helping develop people. Being an empathetic person and seeing others succeed can become a powerful replacement for that sense of accomplishment. Having a strategic mindset and seeing your place in a larger puzzle can help ease the loss of the day-to-day tactical achievements.

 

Is Management Right for Me?

Well, nobody else can decide if it’s right for you. I’ll share one final thought with you, though. At the start of this piece, I said I was wrong about going into management. That was true. In that role, I wasn’t a great manager, and I voluntarily went back to an engineering role. I’ve since made the jump again, in a situation better suited to me and for an organization that helps to develop my skills and wants to see me succeed as a manager. I have a better idea of what to expect, and I’m much happier this time around. Whether you decide to make the leap into management or not, know it’s not a one-way street. Whichever way you decide to go, I hope the thoughts here help you on your journey.

 

I like sharing my thoughts on the softer side of IT. If you’ve found this article interesting, please check out my other articles The Most Important Skill You're Not Hiring For and Know Your IT Department Frenemies (aka Why Can’t We Be Friends?).

Everyone take a deep breath and calm down. The likelihood of a robot taking over your job any time soon is very low. Yes, artificial intelligence (AI) is a growing trend, and machine learning has improved by leaps and bounds. However, the information technology career field is fairly safe, and if anything, AI and machine learning will only make things better for us in the future. Still, a few IT jobs have already experienced the impact of AI, and I want to cover those here. Take this with a grain of salt, since AI/machine learning technology is fairly young and a lot of the news out there is simply conjecture.

 

Help Desk/IT Support

Think about the last time you called a support desk. Did your call get answered by a human or a robot? OK, maybe not an actual robot (that would be awesome), but an interactive voice response (IVR) system. How annoying is that? How often do we just start yelling, “Representative... representative... REPRESENTATIVE!”? It can take several trips through an IVR menu before we reach a human who can help us out. This is all too often the situation when we call support or the help desk. Unfortunately (for help desk specialists), AI is only making IVR more efficient. AI enhances the capability of the IVR system to better understand and process human interaction over the phone. With IVR systems configured for automatic speech recognition (ASR), AI essentially eliminates the need for input via the keypad, as it can more intelligently process the human voice response.

 

Data Center Admins

This one hurts because I’ve done a lot of data center admin work and still do some today. The idea of machine learning or AI replacing this job hits close to home. The truth is automation tools are already replacing common tasks data center admins used to carry out daily. Monitoring tools have used AI to improve data analytics pulled from system scans. Back when I started in IT, the general ratio was around one hundred systems to one administrator. With advances in monitoring, virtualization and AI, it’s now closer to one thousand systems to every administrator. While this is great for organizations looking to cut down on OPEX, it’s not great news for administrators.

 

Adapt or Die

Yeah, maybe that’s a little exaggerated, but it’s not a bad way to think. If you don’t see the technological advances as a hint to adapt, your career likely will die. AI and machine learning are hot topics right now, and there’s no better time to start learning the ins and outs of it and how you can adapt to work with AI instead of becoming obsolete. Understanding how to bridge the gap between humanity and technology can serve you well in the future. One way you can adapt is by learning programming, thereby gaining a better understanding of automation, AI, and machine learning. Maintain your creativity by implementing new ideas and using AI and machine learning to your advantage.

 

In the end, I don’t believe AI or machine learning will eliminate the need for a human workforce in IT. The human brain is far more adept at judgment, creativity, and context than any robot or machine will ever be, and it can adapt, learn, and connect with other humans in ways machines can’t. There might be an influx of jobs being taken over by AI, but we’ll always need humans to program the software and design the machines.

In the IT industry, you’ll hear the joke “I’ll sell you a DevOps; how much is it worth?” But the joke’s on you, because you can’t sell (or buy) DevOps; it’s an intangible entity. It’s a business process combining software development (Dev) and IT operations (Ops), with the aim of helping teams understand what goes into making and maintaining applications and business processes. All this happens while working as a team to improve the overall performance and stability of said apps and processes, rather than “chucking it over the fence” once your department’s piece of the puzzle is finished.

 

DevOps is often referred to as a journey, and you’ll probably need to pass several milestones before you can consider your company a DevOps house. Several of the major milestones stem from adopting a blue/green method of deployment, in which a new version of your code (blue) runs alongside the current version (green) while you slowly move production traffic over to the new blue deployment, monitoring the application to see if improvements have been made. Once all the traffic is running on the blue version, you can stage the next change on the green environment. If the blue deployment turns out to be a detriment to the application, it’s backed out and all traffic reverts to the current green version.
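
 

To make those traffic-shifting mechanics concrete, here’s a minimal PowerShell sketch of a staged blue/green cutover. It assumes a load balancer exposing a REST endpoint for pool weights and an application health endpoint; the $lbApi URL, the payload shape, and the health URL are hypothetical placeholders, not any particular vendor’s API.

# Gradually shift traffic from the green pool to the blue pool, reverting
# if the app's health endpoint reports a problem. $lbApi, the payload
# shape, and the health URL are hypothetical placeholders.
$lbApi = "https://lb.example.com/api/pools/myapp"
foreach ($blueWeight in 10, 25, 50, 100) {
    $weights = @{ blue = $blueWeight; green = 100 - $blueWeight } | ConvertTo-Json
    Invoke-RestMethod -Uri $lbApi -Method Put -Body $weights -ContentType "application/json"
    Start-Sleep -Seconds 300   # let the new weighting soak while you watch your monitoring
    $health = Invoke-RestMethod -Uri "https://myapp.example.com/health"
    if ($health.status -ne "ok") {
        # Back out: all traffic reverts to the current green version
        $revert = @{ blue = 0; green = 100 } | ConvertTo-Json
        Invoke-RestMethod -Uri $lbApi -Method Put -Body $revert -ContentType "application/json"
        throw "Blue deployment failed its health check; traffic reverted to green."
    }
}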

 

A key part of the above blue/green deployment is a methodology of continuous integration and continuous deployment (CI/CD), whereby minor improvements are always being undertaken with the goal of optimizing the software and the hardware it runs on. To get to this point you need to make sure you have a system in place to continuously deploy to production, as well as a platform for continual testing. Your QA processes need to tackle everything from user integration to vulnerability testing and change management, and since you don’t want to have to be hunting around finding IP addresses or resource pools to run it on, automation is going to be key.

 

As you move towards CI/CD adoption rather than separate coding and testing phases, you begin to test as the code is being written. In turn, you’ll start to automate this testing and eventual movement into production, which is referred to as a deployment pipeline. Finally, you’ll also need a more detailed way of performance monitoring, hardware monitoring, software monitoring, and logging. With performance monitoring, it’s no longer good enough to look at network latency—you need to have a way to understand the performance process, including the IO to an application stack, the amount of code commits and bugs identified, the vulnerabilities being handled, and the environment’s health status. With so many moving parts, you’ll also need something to ingest the logs and give you greater insights and analysis to your environment.

 

But for all this to be undertaken, the first and possibly most major hurdle you’ll have to clear is the cultural shift within the organization. Willingness to cooperate truthfully and honestly as well as making failure less expensive is at the core of this shift. This cultural move must be led from the top down within the company. Making IT ops, software development, and security stop pointing the finger at each other and understand they all have a shared responsibility in the other departments’ undertaking can be a challenge, but if they’re properly incentivized and understand the overall goal, this shift can be a smoother process for an organization.

 

Building the correct foundation via the above milestones lets you move from getting started into the five stages of DevOps evolution: Normalization, Standardization, Expansion, Automated Infrastructure Delivery, and Self-Service. Companies moving into the Normalization stage adhere to true agile methods, and the speed at which they make changes begins to increase, so with time they’re no longer hanging around like a loris, taking days or weeks to patch critical vulnerabilities, but moving and adapting with the speed of a peregrine falcon.

 

The recent Puppet 2019 State of DevOps Report raises the idea of improving your security stance by moving through the five stages of evolution so you can adapt quickly to vulnerabilities; for instance, only about 7% of those surveyed can respond to a critical vulnerability within an hour. The organizations with fully integrated security practices show the highest levels of DevOps evolution. This evolution, in turn, will let you soar through the clouds.


Meh, CapEx

Posted by ian0x0r Nov 7, 2019

Do you remember the good old days when you saved up your hard-earned cash to buy the shiny thing you’d been longing for? You know, like that Betamax player, the TV with the lovely wood veneer surround, or a Chevrolet Corvair? What happened to those days?

 

I remember as a young lad growing up in the U.K., there were shops like Radio Rentals or Rumbelows where you could get something “On Tick.” ¹ It didn’t seem like the done thing at the time; it almost seemed like a dirty word. “Hey, have you heard, Sheila got her fridge-freezer on tick.” The look of shock on people’s faces!

 

Now fast forward 25 years and here we are—almost everything is available on a buy now, pay later or rental model. Want a way to access films quickly? Here’s a Netflix subscription. Need something to watch them on? Hey, here’s a 100” Ultra HD 8K, curved, HDR, smart TV with built-in Freeview and Freesat, yours for 60 low monthly payments of £200. Need a new set of wheels? No problem. Something like 82% of all new cars in the U.K. are purchased with a PCP payment plan. This is people reaching for the shiny thing and being able to get it when previously they couldn’t or shouldn’t.

 

The OpEx Model

 

So, what’s my point and how is this relevant to the IT landscape?

 

First and foremost, rental models can work out more profitably for the vendor selling the goods: a little interest premium gets slapped on the payments as the cost is spread over a longer term. Rental income is also more predictable; the annuity is the name of the game.
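
 

To see why the spread-out payments favor the vendor, here’s a back-of-the-envelope comparison; every figure below is invented purely for illustration.

# Hypothetical five-year cost of the same kit bought outright (CapEx)
# vs. rented on a monthly plan (OpEx). Figures are made up.
$capex = 50000 + (5 * 4000)      # upfront purchase plus yearly support
$opex  = 5 * 12 * 1250           # monthly payment, five years
"CapEx five-year total: $capex"  # 70000
"OpEx five-year total:  $opex"   # 75000; the premium is the vendor's margin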

 

Incentivizing the rental model makes it look more attractive. Look at the cost of an iPhone, for example. Each new release gets more and more expensive. My first four cars combined cost less than a new iPhone 11 Pro. But Apple (and others) are smart. Pay your monthly sum of cash and every 12 months, they’ll give you a new phone. And oh heck, if you drop it and smash the screen, no problem—it’s covered in the monthly cost. You don’t get that service if you buy an iPhone upfront with your hard-earned cash. It makes customers sticky, as in they’re less likely to move elsewhere.

 

Now let’s look at IT vendors. Microsoft, Amazon, Dell, HPE, you name it, their rental model fits nicely into an OpEx purchasing plan.

 

There’s no denying Microsoft has killed it with Office 365. This is something I’ll dive deeper into in an upcoming blog post. AWS and all public clouds allow you to pay for what you consume, on a monthly basis, although this cost isn’t always predictable. 

 

Even hardware vendors are at it. Dell can wrap up your purchase in a nice finance package spread over 3 – 5 years. HPE has their Green Lake program, which is, and I quote, “Delivering the As A Service experience. We live in a consumption-based world—music, TV shows, groceries, air travel and much more. Why? Because consuming services delivers a better outcome faster, in our personal lives and in business. It’s true for music, and it’s also true for IT.” ²

 

Conclusion

 

So why Meh, CapEx? In an increasingly diverse IT landscape, with more and more solutions delivered As-A-Service, coupled with the increased pace of innovation, it can be difficult to predict costs and get it right with a CapEx investment lasting 3 – 5 years. I mean, who wants to still be rocking an iPhone 11 in 2024?

 

¹ On Tick – To pay for something later, via Urban Dictionary https://www.urbandictionary.com/define.php?term=get%20it%20on%20tick

² HPE Green Lake Quote https://www.hpe.com/uk/en/services/it-consumption.html

It’s hot in Dubai. I mean for-real hot, and I’m a native Texan. It’s blistering, cloudless, equatorial sun Northern Hemispher-ians can’t imagine, plus the four-nines humidity of the Mississippi Gulf Coast. Mid-day it’s as shocking as a windy February night in Chicago. It hits you in the face and chest when you walk outside. The unintended effect of this is that, unlike at US and EMEA conferences where attendees have multiple distractions to consider, at GITEX 100,000 humans were happy to be together for a week. The A/C was glorious. Perhaps the communal relief of the indoors was a foundation for great conversations (and several surprises) at my first conference in the Middle East.

 

Friends who’ve spent time in the Gulf essentially said, “PFFT. Dubai is the Las Vegas of the region.” And in some ways, that’s true. It’s a convention city, a showplace, and has similar attractions, just turned up to 11. But, while it’s also westerner-friendly, it doesn’t dump everyone into the same cacophonous strip of mostly Nerfed vice and titillation as a gimmick. Instead, it finds a way to accommodate multiple cultures in close proximity. Although, you might wave off the pulled pork tacos at the Tex-Mex place at Le Méridien, especially if you’re from Austin.

 

It’s likely Dubai is requirements-based rather than organically developed, i.e., it’s intentional and planned like (good) IT. That may be what led to the second of my surprises: IT professionals attending GITEX were more like US IT pros than those in some other regions I’ve visited. Typically, the dress and conversation at non-US events is more formal and led by managers instead of geeks. But at GITEX, attendees are the actual engineers who manage data centers, deploy and troubleshoot apps, and are trying to get a handle on evolving cloud monitoring. But they’re also warm and gracious, patiently restating complex questions to this American and taking the time to ensure English comms didn’t obscure nuance and technical detail. That’s not easy with acronym- and buzzword-overloaded technology, and it reiterated how eager they were to make real changes to operations, not just play with gear.

 

It was also different in that while most US SolarWinds customers work with us directly, international customers often work through our partners. GITEX was my first show with two of our regional distributors, Clever Distribution and Spire Solutions, in the booth with us. I’ve been fortunate to speak at partner events, but this was my first chance to watch lines of attendees visit and say hi to the people they trust to help select tools for their businesses. Apparently smiles, handshakes, and selfies aren’t only for the THWACK® community at Cisco or Microsoft conferences; our partners have their own active social cosmos. It also let me sip more lattes and answer more “impossible” questions, the answers to which are usually, “Oh, that’s been in there since version 12.x, can I show you how to get started?” and “When’s the last time you updated? Did you see the new upgrade tool?” (Hint-hint: What We’re Working On, Upgrade Planning, ahem.)

 

But I think what’s stayed with me most was the number of displays featuring not just new tech, but new tech in production improving the human experience. In one case, tech was even touching hearts. There was med-IoT on ambulances in the Dubai EMS fleet, recent investments in citizen data security and privacy, lots of Smart City tech, and a big focus on useful 5G without a sole focus on vapor deployments. In fact, the largest and most tradeshow-beautiful hall was dedicated to private-public partnership projects, with row after row of not only cool toys, but well-designed projects. The ratio of useful technology to shiny and new was refreshing.

 

But one stand, packed in among all the IT tech, stood out in an unexpected way: a Hajj virtual reality simulator. Saudi Arabia’s Ministry of Hajj and Umrah sponsored a stand for Karachi-based Labbaik VR, who’ve spent years developing a PC-based training system for pilgrims planning to perform their first Hajj or Umrah. Recently, they combined their cloud-scale 360 world data with VR headsets. The result is an immersive 3D experience.

 

It was the sort of technology a bunch of engineers like us would build for fun, but because of the domain, it’s transcending entertainment to truly touch the people who use it. Hajj is the fifth pillar of Islam, and Muslims who are physically and financially able have a duty to make the pilgrimage to Mecca at least once in their lifetime. In reality, though, not all can make it, for many reasons, including cost, health, and political boundaries. Speaking with developers from the Labbaik VR team, I heard what it was like as users transitioned from the PC to the 3D headset version. What had been a helpful and engaging simulation became an emotional experience, especially for those who’d likely never get a chance to make the pilgrimage and fulfill their duty. Joy, wonder, and even tears weren’t uncommon.

 

Perhaps it’s being in technology a long time, or maybe getting my start in customer service and transportation systems, but as technology and human activity become more and more closely intertwined, what we do as IT pros increasingly matters. Sometimes it seems like we’re consumed by never-ending troubleshooting to trim a few milliseconds off a web transaction, stuff yet another 100 VMs into our existing clusters, figure out how we’re going to get a handle on containers with no orchestration budget, and on and on. But apps delivering social services, data analysis identifying and correcting long-held biases, and distributed tech touching or even saving lives are becoming more and more common.

 

Yes, we still must do All the Enterprise Things, but it’s never been a more fulfilling time to pay your mortgage with what we do at our keyboards. For those in Dubai, thanks for coming by the stand. It was, as always, a pleasure to speak in person and not just on THWACK. But more than that, thanks for the stories you shared about your IT adventures throughout the Middle East and Africa. The more I travel around the world for SolarWinds the more I’m in awe of the breadth of work you all do. If tech is the universal language, THWACK members are particularly fluent.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Brandon Shopp about improving systems management for Microsoft environments. He explores a range of considerations.

 

Microsoft offers some of its own monitoring options, such as System Center and Windows Admin Center. Federal IT pros can optimize performance by including additional monitoring strategies such as monitoring Windows servers, Microsoft applications, databases, Hyper-V, Azure, and Office 365.

 

Monitoring Strategies

 

Windows Servers

Identifying a performance issue involves understanding what’s not operating efficiently. In a Microsoft environment, this means knowing the operating system isn’t part of the problem.

 

To gain this knowledge, consider tools capable of focusing on the Windows servers themselves to provide highly specific information and help pinpoint—or rule out—a server-based issue.
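
 

As a rough illustration of the OS-level signals such a tool gathers continuously, here’s an ad-hoc PowerShell spot check of CPU load and free memory; “server01” is a placeholder, and CIM/WinRM access to the target is assumed.

# Spot-check CPU load and free physical memory on a Windows server.
# "server01" is a placeholder; assumes CIM/WinRM access to the target.
$cpu = Get-CimInstance -ClassName Win32_Processor -ComputerName server01 |
    Measure-Object -Property LoadPercentage -Average
$os = Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName server01
$freePct = [math]::Round(100 * $os.FreePhysicalMemory / $os.TotalVisibleMemorySize, 1)
"Average CPU load: $($cpu.Average)%"
"Free physical memory: $freePct%"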

 

Microsoft Applications

It can be impossible to truly understand application health—and, in turn, performance—without understanding how well Microsoft application services, processes, and components are operating.

 

To get this critical information, consider a tool that gives the federal IT team the ability to:

 

  • Isolate page-load speeds based on location, application components, or underlying server infrastructure

 

  • Monitor requests per second, throughput, and request wait time

  • Identify the root cause of problems by monitoring key performance metrics, including request wait time and SQL query execution time

  • Identify which webpage elements are slow and affect overall webpage application performance

 

A greater understanding of the performance levels of the processes flowing into and out of applications can prove invaluable when trying to identify higher-level application performance issues.
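
 

For a sense of what those metrics look like at the OS level, here’s a small sketch using PowerShell’s Get-Counter against a few standard IIS/ASP.NET performance counters. Exact counter paths vary by Windows and IIS version, so treat these paths as examples rather than a definitive list.

# Sample standard IIS/ASP.NET counters every 15 seconds, four times.
# Counter paths vary by version; run Get-Counter -ListSet * to see
# what's available on your servers.
$counters = @(
    '\Web Service(_Total)\Current Connections',
    '\ASP.NET\Requests Queued',
    '\ASP.NET Applications(__Total__)\Requests/Sec'
)
Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 4 |
    ForEach-Object {
        $_.CounterSamples | Select-Object Path, CookedValue | Format-Table -AutoSize
    }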

 

Databases

Every federal IT pro knows monitoring database performance is a must.

 

Specifically, be sure to invest in a tool with the ability to troubleshoot performance problems both in real time and historically. The historical perspective will allow the team to identify a baseline, so they can better understand the severity of a slowdown, and then analyze the database workload to identify inefficiencies. Ideally, the tool of choice will also provide SQL Server index recommendations as well as alerting and reporting capabilities.
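
 

As one hedged example of the data behind such a baseline, here’s a PowerShell sketch pulling the top SQL Server wait types. It assumes the SqlServer module and appropriate permissions; “sqlserver01” is a placeholder instance name.

# Top ten SQL Server wait types, a common starting point for a workload
# baseline. Assumes the SqlServer module; "sqlserver01" is a placeholder.
Import-Module SqlServer
$query = @"
SELECT TOP (10) wait_type,
       wait_time_ms / 1000.0 AS wait_time_s,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
"@
Invoke-Sqlcmd -ServerInstance "sqlserver01" -Query $query | Format-Table -AutoSize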

 

Hyper-V

For optimized virtual infrastructure performance, be sure to optimize Microsoft Hyper-V—the company’s virtualization platform.

 

One of the best ways to do this is by understanding and optimizing the size of virtual machines through capacity planning. It’s also possible to take this even further by predicting the behavior of the virtual environment and solving potential issues before they escalate.
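
 

As a small illustration of that first step, the Hyper-V PowerShell module can compare assigned memory to actual demand per VM, a quick input to right-sizing. This is a sketch to run on the Hyper-V host; the demand figures assume dynamic memory is enabled.

# Compare assigned memory to actual demand for each VM on a Hyper-V host,
# a quick input to capacity planning. Requires the Hyper-V module; demand
# is only reported meaningfully when dynamic memory is enabled.
Get-VM |
    Select-Object Name, State,
        @{ Name = 'AssignedGB'; Expression = { [math]::Round($_.MemoryAssigned / 1GB, 1) } },
        @{ Name = 'DemandGB';   Expression = { [math]::Round($_.MemoryDemand / 1GB, 1) } } |
    Sort-Object DemandGB -Descending |
    Format-Table -AutoSize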

 

Not all tools will provide these capabilities, so choose wisely.

 

Azure

Many federal IT pros believe cloud monitoring is in the hands of the cloud provider. Not so. It’s possible—and highly recommended—to monitor the cloud infrastructure and transit to help ensure optimized system and application performance.

 

For example, a good tool will provide the ability to monitor Azure-based applications with as much visibility as on-premises applications. A better tool will go even further and allow the federal IT pro to measure the performance of each network node inside the cloud and to analyze historical performance data to pinpoint a timeframe if performance has degraded.

 

Microsoft offers a tool called Azure Monitor, which allows the federal IT pro to collect performance and utilization data, activity and diagnostics logs, and notifications from various Azure resources. Azure Monitor integrates with other analytics and monitoring tools, which is a plus for larger environments supporting a range of different types of products and services from a range of vendors.
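
 

For illustration, here’s roughly how you’d pull an hour of CPU data for a virtual machine through Azure Monitor with the Az PowerShell module; the resource ID below is a placeholder, and an authenticated session (Connect-AzAccount) is assumed.

# Retrieve the last hour of "Percentage CPU" for a VM at 5-minute grain.
# Assumes the Az modules and a signed-in session; the resource ID below
# is a placeholder.
$resourceId = "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM"
Get-AzMetric -ResourceId $resourceId -MetricName "Percentage CPU" `
             -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) `
             -TimeGrain 00:05:00 |
    Select-Object -ExpandProperty Data |
    Select-Object TimeStamp, Average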

 

For further peace of mind—and to help protect against data loss—look for the ability to back up emails to a secondary location.

 

Conclusion

 

Operating in a Microsoft-centric world doesn’t mean the federal IT pro must rely only on Microsoft products and services to help optimize performance. Yes, Microsoft has excellent options, but other tools out there can go a long way toward ensuring a top-performing environment on site or in the Azure cloud.

 

Find the full article on our partner DLT’s blog Technically Speaking.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

We come to you today with a late-breaking postscript! When we last left our IT hero, they were asking themselves deep and probing questions like Will I Save Money by Moving to the Cloud? and I’m Still On-Premises, Is That OK? As they go about modernizing their IT organization, they cry out, “What if IaaS isn’t for me?” Never fear—SaaS is here!

 

As we laid out in our previous reports from the data center, a SaaS solution is broadly defined as:

  • SaaS (Software-as-a-Service): You subscribe to, access, and consume software that lives and is managed elsewhere.

 

What Does It Really Mean?

Do you like running an email server? A sales platform? Chat? Antivirus? Security software? File sharing? Help desk? The list goes on and on. Let’s cut to the chase: with a SaaS platform, you get to offload many of the traditional aspects of IT management (Is the solution up? Does it need more space? How do I upgrade it?) to someone else and take advantage of all the benefits.

 

Seems simple enough, right? And maybe it is, but as with any other solution or architecture, you need to ask yourself a few questions. This analysis is especially important if you work in the SMB space, as your resources are almost certainly constrained in some way, whether in people, time, money, leadership, etc. If an enterprise tests the water on something like a SaaS solution, spends several million dollars, and it just doesn’t work out, the decision may end up being immaterial. If you spend weeks trying out a solution (any solution!) while spending tens of thousands, or in some cases even hundreds, of dollars, and the solution tanks, the vibrancy of your organization can be negatively impacted.

 

OK, so you want to give this SaaS thing a chance. That SaaS platform sure looks appealing at first glance, but what questions do you need to ask within your org to help ensure success when you’re looking to adopt a new cloud offering?

 

  • Efficiencies. As always, what’s driving you to look at a cloud-based offering? Are your people driving sales or A/P off a spreadsheet or perhaps spending hours swivel-chairing data from one place into another? In short, can you derive value from simplifying your activities and increasing efficiency? Yes? Congrats—you’re ready to continue exploring SaaS solutions for your business.

 

  • Costs. People are expensive. Software is expensive. People managing software is EXPENSIVE! With Software as a Service, you can kick the non-value-add parts of the equation out the door and just consume the solution.

 

  • What do you want to work on? At first glance, this seems very similar to the above, but with a slightly different spin. This speaks to the heart of your organization and your mission. If you run a custom cabinetry business, do you really care where your CRM lives? Of course not. You just want to make some fine woodwork, and you couldn’t care less how the nuances of your application are architected as long as you can keep track of your pipeline, materials, billing, and receivables. In circumstances like this, as long as you find the right provider, you’re likely to derive the biggest benefit from a SaaS solution.

 

So, after asking yourself questions like this and looking at what makes your organization unique, SaaS still sounds appealing to you. But it’s not all lilacs in the springtime. As with anything in this world, there are trade-offs. Let’s quickly tick through a couple of the typical stumbling blocks for IT organizations looking at SaaS solutions.

 

  • Connectivity. I’ve seen SMBs run their business off consumer-grade cable connections. Not everyone can have carrier redundancy and multiple legs of dark fiber. If you’re a small shop and you’re going to heavily consume a SaaS application (or anything requiring lots of data), you need to look at your connectivity. And if you realize you don’t have the network capacity, that cost needs to roll up into your TCO and ROI calculations to determine if the solution still makes fiscal sense. If you’d like to dive deeper into connectivity questions and how they impact your cloud strategy, THWACK MVP rschroeder has provided some excellent thoughts on the matter in a previous post. Thank you, rschroeder!

 

  • Lock-in. This really isn’t any different from a traditional on-premises solution, but it needs consideration nonetheless. Are you OK hitching yourself to a SaaS provider, potentially for the long haul? Where other types of cloud offerings may provide an easier way to transition between providers, moving from one SaaS platform to another is likely to involve considerable people-hours and dollars. When asking yourself if you want to be married to a SaaS provider for the long haul, make sure you evaluate their strategy, motivations, and fiscal health to ensure a long and happy relationship. Again, this is no different from evaluating an on-premises solution, but since you’re going to be spending vital organization dollars, the evaluation needs to be completed regardless of direction.

 

  • Loss of control. This can take several forms. When you partner up with a SaaS provider, you’ll likely lose the ability to control maintenance windows, patching, upgrades, and so on. Are you OK if the API you use suddenly changes parameters? These are solvable problems, but you need to evaluate your organization’s unique tolerance for some loss of control.

 

In our last cloud exploration, we asked the question “I’m Still On-Premises, Is That OK?” Yes, it is. If you’re feeling a little bit left behind, a SaaS platform can be an easy way to dip your toes into the cloud. By simplifying how you manage and approach commodity services, with a SaaS solution, you may find your IT superhero has transformed your business in ways unimaginable just a few years earlier.

Home this week and getting ready for Microsoft Ignite next week in Orlando. If you're at Ignite, please stop by the booth and say hello. I love talking data with anyone.

 

As always, here's a bunch of links I found interesting. Enjoy!

 

Microsoft beats Amazon to win the Pentagon’s $10 billion JEDI cloud contract

The most surprising part of this is an online bookstore thought they were the frontrunner. This deal underscores the difference between an enterprise software company with a cloud, and an enterprise infrastructure hosting company that also sells books.

 

Google claims it has achieved 'quantum supremacy' – but IBM disagrees

You mean Google would embellish upon facts to make themselves look better? Color me shocked.

 

Amazon migrates more than 100 consumer services from Oracle to AWS databases

"Amazon doesn't run on Oracle; why should you?"

 

“BriansClub” Hack Rescues 26M Stolen Cards

Counter-hacking is a thing. Expect to see more stories like this one in the coming years.

 

Berkeley City Council Unanimously Votes to Ban Face Recognition

Until the underlying technology improves, it's best for us to disallow the use of facial recognition for law enforcement purposes.

 

China’s social credit system isn’t about scoring citizens — it’s a massive API

Well, it's likely both, and a possible surveillance system. But if it keeps jerks away from me when I travel, I'm all for it.

 

Some Halloween candy is actually healthier than others

Keep this in mind when you're enforcing the Dad Tax on your kid's candy haul tomorrow night.

 

Every now and then my fire circle regresses to its former life as a pool.

 

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Mav Turner with suggestions on improving your agency’s FITARA score. FITARA rolls up scores from other requirements and serves to provide a holistic view of agency performance.

 

The most recent version of the scorecard measuring agency implementation of the Federal IT Acquisition Reform Act gave agencies cause for both celebration and concern. On the whole, scores in December’s FITARA Scorecard 7.0 rose, but some agencies keep earning low scores.

 

Agencies don’t always have the appropriate visibility into their networks to allow them to be transparent. All agencies should strive for better network visibility. Let’s look at how greater visibility can help improve an agency’s score and how DevOps and agile approaches can propel their modernization initiatives.

 

Software Licensing

 

Agencies with the lowest scores in this category failed to provide regularly updated software licensing inventories. This isn’t entirely surprising; after all, when licenses aren’t immediately visible, they tend to get forgotten or buried as a budget line item. Out of sight, out of mind.

 

However, the Making Electronic Government Accountable by Yielding Tangible Efficiencies Act (MEGABYTE Act) of 2016 is driving agencies to make some changes. MEGABYTE requires agencies to establish comprehensive inventories of their software licenses and use automated discovery tools to gain visibility into and track them. Agencies are also required to report on the savings they’ve achieved by optimizing their software licensing inventory.

 

Even if an agency doesn’t have an automated suite of solutions, it can still assess its inventory. This can be a great exercise for cleaning house and identifying “shelfware,” software purchased but no longer being used.
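
 

As a starting point, even plain PowerShell can rough out a per-server inventory from the Windows registry’s uninstall keys. This is a sketch, not a substitute for a proper automated discovery tool.

# Rough software inventory from the registry uninstall keys. 32-bit
# entries live under Wow6432Node on 64-bit systems. A sketch only;
# it won't catch everything a real discovery tool does.
$paths = @(
    'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
    'HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'
)
Get-ItemProperty -Path $paths -ErrorAction SilentlyContinue |
    Where-Object { $_.DisplayName } |
    Select-Object DisplayName, DisplayVersion, Publisher, InstallDate |
    Sort-Object DisplayName |
    Export-Csv -Path .\software-inventory.csv -NoTypeInformation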

 

Risk Management

 

Risk management is directly tied to inventory management. IT professionals must know what applications and technologies comprise their infrastructures. Obtaining a complete understanding of everything within those complex networks can be daunting, but there are solutions to help.

 

Network and inventory monitoring technologies can give IT professionals insight into the different components affecting their networks, from mobile devices to servers and applications. They can use these technologies to monitor for potential intrusions and threats, but also to look for irregular traffic patterns and bandwidth issues.

 

Data Center Optimization

 

Better visibility can also help IT managers identify legacy applications to modernize. Knowing which applications are being used is critical to being able to determine which ones should be removed and where to focus modernization efforts.

 

Unfortunately, agencies often discover they still need legacy solutions to complete certain tasks. They get stuck in a vicious circle where they continue to add to, not reduce, their data centers. Their FITARA scores end up reflecting this struggle.

 

Applying a DevOps approach to modernization can help agencies achieve their goals. DevOps is often based on agile development practices enabling incremental improvements in short amounts of time; teams see what they can realistically get done in three to five weeks. They prioritize the most important projects and strive for short-term wins. This incremental progress can build momentum toward longer-term goals, including getting all legacy applications offline and reducing costly overhead.

 

While visibility and transparency are essential for improvements across all these categories, FITARA scorecards themselves are also useful for shining light on the macro problems agencies face today. They can help illuminate areas of improvement, so IT professionals can prioritize their efforts and make a significant difference to their organizations. Every government IT manager should stay up-to-date on the scoring methodologies and how other agencies are doing.

 

Find the full article on Government Computer News.

 

  The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Following up on Will I Save Money by Moving to the Cloud? This post is part two of taking an atypical look at the public cloud and why you may or may not want to leverage it.

 

If you stop and think for a moment, cloud computing is still in its youth. EC2 has only been a public offering since 2006. Gartner first treated “the cloud” as a real thing about a decade ago. Seven years ago, I saw an early version of the IaaS offering for one of the big three, and it was almost unusable. From this perspective, “the cloud” is still maturing. The last several years have seen a dramatic evolution in the plethora of offerings.

 

Cloud has fundamentally changed the technological landscape, much as virtualization did a few years before. The benefits of cloud have had many going nuts for a while, with cheers of “Cloud first!” and “We’re all in on cloud!” But what if you’re hesitant and wondering if the cloud is right for you and your organization? That’s OK, and it’s part of what we’ll explore today—some reasons you may or may not want to consider staying on-premises.

 

What’s Your Motivation?


In my mind, this is the biggest question you need to ask yourself, your org, and your stakeholders. Why are you moving to the cloud?

  • Is your business highly dynamic or does it need to scale rapidly?
  • Do you need to leverage an OpEx cost model for cash flow purposes?
  • Does your app need a refactor to leverage new technologies?

 

These are some of the best drivers for moving to the cloud and they bear more investigation.

 

  • Is your manager prone to moving you on to the next big thing, but only until the next big thing comes along?
  • Are you making the move simply because everyone else is doing it?
  • Do you believe you’ll inherently save money by shifting your infrastructure to cloud?


These things should give you pause, and you’ll need to overcome them if you want a successful cloud strategy.

 

Risk and Control


In my experience, most people hesitate to move to the cloud because of risk: namely, their tolerance for risk within their information security program. It seems every week we hear news of a breach from an insecure cloud configuration. Now, is the cloud provider to blame for the breach? Almost certainly not. However, depending on several factors (most notably your competencies), the cloud may make it easier to leave yourself open to risk. Can the same situation occur on-premises? Absolutely. Just remember, breaches happened before cloud was a thing. Ask yourself if any additional risk from changing providers or paradigms is within your tolerance level. If it is, great! You’re ready to continue your cloud analysis. If not, you need to determine the better move for you: do you change your platform, or do you change your risk tolerance?

 

What about where your data is and who has access to it? One of the early IaaS leaders, who’s still one of the top 10 providers, required administrative access to your servers. How particular are you and your organization about data locality and retention times? What happens to your data when you leave a provider? All these problems can be overcome, but before committing to any change in direction, ask yourself where you want to spend your resources: on changing how you mitigate risk in a new environment, or on dealing with a known commodity.

 

Competencies


What do you want your people to do and what do they want to do? Chances are your IT organization has somewhere between one and hundreds of technologists. Switching platforms requires retraining and leveling these people up. You need to consider your appetite for these tasks and weigh it against the costs of running your business should you stick with the status quo.

 

You should have a similar discussion around your toolsets. In the grand scheme of things, cloud is still relatively young. Many vendors aren’t ready to support a multi-cloud or hybrid cloud approach. As it relates to operations, do you need to standardize and have a single pane of glass or are you OK with different toolsets for different environments?

 

Finally, you need to think about how your strategies affect your staff and what it means for employee retention. If your business is cutting-edge, pushing boundaries, and disrupting without leveraging the cloud, you could end up with a people problem. Conversely, if you operate in a stable, predictable environment, you’ll need to consider whether disruption from changing your infrastructure is worth upending your team. Don’t get me wrong, you shouldn’t decide on a business strategy solely on employee happiness. On the other hand, engaged teams are routinely shown to be more effective, so it’s a factor to consider.

Cost


Cost as it pertains to cloud is a complicated matter, and you can look at it from many different angles. I explore several relevant questions in my post Will I Save Money by Moving to the Cloud?

 

All these questions aside, neither the cloud nor the legacy data center is going anywhere anytime soon. Heck, I just installed a mainframe recently, so you can trust most technology has varying degrees of stickiness. I want you and your organization to choose the right tool for the situation. Hopefully, considering a couple of different viewpoints helps you make the right choice.

 

The conversation continues in part three of the series as we take a look at SaaS in Beyond IaaS, Cloud for the SMB, the Enterprise, and Beyond!


VMworld EMEA 2019

Posted by Sascha Giese, Oct 27, 2019

It’s that time of the year again! VMworld EMEA is back in Barcelona.

 

As one of the annual family reunions for all things data center, virtualization, and cloud, it’s an event we can’t miss. This year it runs November 4 – 7, and you can find us at stand B524.

 

Last year, we had VMAN 8.3 out the door a few months before the event, and now we’re on version 8.5.

If things work out, we might be able to show you some of what we’re working on. And while we can't promise anything, it's looking good so far. Fingers crossed!

 

Tom and Patrick are attending MS Ignite in Orlando, which is happening at the same time, so it will be Leon and me, plus a group of experts from our EMEA offices, helping you in Barcelona.

Last November I saw this thing, but unfortunately I couldn’t find time to play with it.

 

Now, who wouldn’t want to play (I mean, do research) with a virtual shovel excavator? Will these guys return? Asking for a friend!

And Barcelona being Barcelona, there’s good food to look forward to again.

 

 

And for sure, we’re also looking forward to seeing you guys again!

In my last post I gave some background on one of my recent side projects: setting up and then monitoring a Raspberry Pi running Pi-Hole. In this post, I’m going to dive into the details of how I set up the actual monitoring. As a reminder, you can download these Server & Application Monitor (SAM) templates from the THWACK content exchange:

 

 

Also, the SolarWinds legal team has persistently insisted I remind you these templates are provided as-is, for educational purposes only. The user agrees to indemnify the author, the author’s company, and the author’s third-grade math teacher against any unexpected side effects such as drowsiness, nausea, ability to fly, growth of extra limbs, or attacks by flightless waterfowl.

 

Setting Up Monitoring

As I said at the start of this series (**LINK**), on top of enjoying what Pi-Hole was doing for my home browsing experience, I also wanted to see if I could collect meaningful monitoring statistics from an application of this type. I started off with the basics—getting the services monitored. There weren’t many, and it looked like this once I was set up.

 

 

In the end, the services I needed to monitor were:

  • pihole-FTL
  • lighttpd
  • lightdm
  • dhcpd

 

Because monitoring services is sort of “basic blocking and tackling,” I’m not going to dig too deep here. Also, because I’ve provided the template for you to use, you shouldn’t have to break a sweat over it.

 

Next, I wanted to capture all those lovely statistics the API is providing. The only way I could do this was by building a script-based component in SolarWinds SAM. Now I’m no programmer, more like a script-kiddie, but I can sling code in a pinch, so I wasn’t overly worried…

 

…Until I realized I didn’t want to do this in Perl. It’s one thing to shoehorn Perl into making JSON calls because I wanted to prove a point. But since I wanted to put this template on THWACK for other folks to use, I had to do it in a scripting language that hadn’t celebrated more anniversaries than my wife and I had (31 years and going strong, thank you very much. My marriage, I mean, not Perl.). So, I took a good, hard look in the mirror and admitted to myself it was finally time to hunker down and write some code with PowerShell.

 

Jokes aside, for a project where I knew I’d be interacting with web-based API calls to return XML style data, I knew PowerShell was going to give me the least amount of friction, and cause others who used my code in the future the least amount of grief. I also knew I could lean on Kevin Sparenberg, Steven Klassen, and the rest of the THWACK MVP community when (sorry, if) I got stuck.

 

I’m happy to report it didn’t take me too long to get the core functionality of the script working—connect to the URL, grab all the data, and filter out the piece I want. It would look something like this:

# Query the Pi-Hole API (it returns JSON), pull out one statistic, and print it
$pi_data = Invoke-RestMethod -Uri "http://mypihole/admin/api.php"
$pi_stat = $pi_data.domains_being_blocked
Write-Host "Statistic: " $pi_stat

Now I needed not only to pretty this up, but also to add a little bit of error-checking and adapt it to the conventions SAM script components expect. Luckily, my MVP buddies rose to the challenge. It turns out Kevin Sparenberg had already created a framework for SAM PowerShell script components. This gem ensured I followed good programming standards and output the right information at the right time. You can find it here.
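
 

For context, here’s the shape those conventions take: SAM script components read Statistic.{name} and Message.{name} pairs from standard output and use the script’s exit code to set component status (0 for Up, 1 for Down). Below is a hedged sketch of the basic script adapted to them; “mypihole” remains a placeholder for your own Pi-Hole.

# Trimmed-down SAM script component: query the Pi-Hole API, emit
# Statistic./Message. pairs for SAM to parse, and exit 0 (Up) on
# success or 1 (Down) on failure. "mypihole" is a placeholder hostname.
try {
    $pi_data = Invoke-RestMethod -Uri "http://mypihole/admin/api.php" -ErrorAction Stop
    Write-Host "Message.DomainsBlocked: Domains on the blocklist"
    Write-Host "Statistic.DomainsBlocked: $($pi_data.domains_being_blocked)"
    Write-Host "Message.QueriesToday: DNS queries handled today"
    Write-Host "Statistic.QueriesToday: $($pi_data.dns_queries_today)"
    exit 0
}
catch {
    Write-Host "Message.DomainsBlocked: Unable to reach the Pi-Hole API: $_"
    Write-Host "Statistic.DomainsBlocked: 0"
    exit 1
}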

 

As I began to pull my basic script into the SAM template, I immediately ran into a problem: Raspberry Pi doesn’t run PowerShell, but the script was attempting to run there anyway.

 

After a bit of digging, I realized the problem. First, I was monitoring the Raspberry Pi itself using a SolarWinds agent. When you do that, SAM “presumes” you want to run script components on the target, instead of the polling engine. In most cases, this presumption is true, but not here. The fix is to change the template advanced options to run in agentless mode.

 

Once that was done, the rest was simple. For those reading this who have experience building script components, the process is obvious. For those of you who don’t have experience, trust me when I say it’s too detailed for this post, but I have plans to dig into the step-by-step of SAM script monitors later!

 

Looking Ahead

At the time I was playing with this, script monitors were the best way to get API data out of a system. HOWEVER, as you can see on the SAM product roadmap page, one of the top items is a built-in, generic API component.

 

I think I just found my next side project.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Jim Hansen about the benefits and challenges of edge computing. Ultimately, this new technology requires scrutiny and planning.

 

Edge computing is here to stay and it’s no wonder. Edge computing provides federal IT pros with a range of advantages they simply don’t have with more traditional computing environments.

 

First, edge computing brings memory and computing power closer to the source of data, resulting in faster processing times, lower bandwidth requirements, and improved flexibility. Edge computing can also be a source of cost savings: because data is processed in real time on the edge devices themselves, it can save computing cycles on cloud servers and reduce bandwidth requirements.

 

However, edge computing also introduces its share of challenges. Among the greatest are visibility and security, owing to the decentralized nature of edge computing.

 

Strategize

 

As with any technology implementation, start with a strategy. Remember, edge devices are considered agency devices, not cloud devices; therefore, they’re the responsibility of the federal IT staff.

Include compliance and security details in the strategy, as well as configuration management. Create thorough documentation. Standardize wherever possible to enhance consistency and ease manageability.

 

Visualization and Security

 

Remember, accounting for all IT assets includes edge-computing devices, not just those in the cloud or on-premises. Be sure to choose a tool that not only monitors remote systems but also provides automated discovery and mapping, so you have a complete understanding of all edge devices.

In fact, consider investing in tools with full-infrastructure visualization, so you can have a complete picture of the entire network at all times. Network, systems, and cloud management and monitoring tools will optimize results and provide protection across the entire distributed environment.

 

To help strengthen security all the way out to edge devices, be sure all data is encrypted and patch management is part of the security strategy. Strongly consider using automatic push update software to ensure software stays current and vulnerabilities are addressed in a timely manner. This is an absolute requirement for ensuring a secure edge environment, as is an advanced Security Information and Event Management (SIEM) tool to ensure compliance while mitigating potential threats.

 

A SIEM tool will also assist with continuous monitoring, which helps federal IT pros maintain an accurate picture of the agency’s security risk posture, providing near real-time security status. This is particularly critical with edge-computing devices, which can often go unsecured.

 

Conclusion

 

The distributed nature of edge computing technology is increasing in complexity, with more machines, greater management needs, and a larger attack surface.

 

Luckily, as computing technology has advanced, so has monitoring and visualization technology, helping federal IT pros realize the benefits of edge computing without additional management or monitoring pains.

 

Find the full article on Government Technology Insider.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.
