
Geek Speak


As you know, we’re gearing up for THWACKcamp 2017, which promises to be our best yet. If you haven’t registered, you’ll want to get on that PRONTO! In our 100% free, virtual, multi-track IT learning event, thousands of attendees will have the opportunity to hear from industry experts and SolarWinds Head Geeks and technical staff. Registrants also get to interact with each other to discuss topics related to emerging IT challenges, including automation, hybrid IT, DevOps, and more.

 

We are continuing our expanded-session, two-day, two-track format for THWACKcamp 2017. SolarWinds product managers and technical experts will guide attendees through how-to sessions designed to shed light on new challenges, while Head Geeks and IT thought leaders will discuss, debate, and provide context for a range of industry topics. In my “If It’s Not in the Ticket, It Didn’t Happen” session, I'll be joined by SolarWinds Product Managers Kevin Sparenberg and Bryan Zimmerman to discuss best practices for streamlining the help desk request process.

 

Because I haven't worked in a help desk setting, and have likely been thought of as a “problem child” by a help desk or two, I’m sure I’ll appreciate the perspectives Kevin and Bryan will share in this session. I look forward to understanding the similarities and differences involved in supporting internal and external stakeholders, in addition to acting as an MSP in this same capacity. Tapping the wisdom they've accumulated from their individual experiences working on and leading help desk teams, Kevin and Bryan will offer help desk technicians advice on everything from the appropriate levels of information that are needed on the front end of the support process, to which tools can be used throughout the resolution workflow to help accelerate ticket resolution.

 

Check out our promo video and register now for THWACKcamp 2017! And don't forget to catch our session!

Logs are insights into events, incidents, and errors recorded over time on monitored systems, with the operative word being monitored. That’s because logging may need to be explicitly enabled on systems still running their defaults, or in an environment you’ve inherited that was never configured for logging. For the most part, logs are retained to maintain compliance and governance standards. Beyond that, logs play a vital role in troubleshooting.

 

For VMware® ESXi and Microsoft® Hyper-V® nodes, logs provide quintessential troubleshooting insight across the node’s entire stack, and they can be combined with alerts to trigger automated responses to events or incidents. The logging process comes down to which logs to aggregate, how to tail and search them, what the analysis should look like, and how to react appropriately to that analysis. Most importantly, logging needs to be easy.

 

Configuring system logs for VMware and Microsoft is a straightforward process. For VMware, one can use the esxcli command or host profiles. For Microsoft, look in the Event Viewer under Application and Services Logs -> Microsoft -> Windows and specifically, the Hyper-V-VMMS (Hyper-V Virtual Machine Management service) event logs. The challenge is efficiently and effectively handling the logging process as the number of nodes and VMs in your virtual environment increases in scale. At that scale, multi-level logging complexity can turn logs into troubleshooting nightmares instead of troubleshooting silver bullets. You can certainly follow the Papertrail if you want the easy log management button at any scale.
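To make the "tail and search" part of that process concrete, here is a minimal Python sketch that follows a syslog file to which an ESXi or Hyper-V node forwards its logs and flags any line containing an error-level keyword. This is an illustration only, not a Papertrail or SolarWinds feature; the log path and keyword list are assumptions you would adjust for your environment.

```python
import time

# Assumed path to an aggregated syslog file your hypervisor nodes forward to.
LOG_PATH = "/var/log/hypervisor/aggregate.log"

# Keywords that usually indicate something worth alerting on (illustrative).
ERROR_KEYWORDS = ("error", "failed", "critical")

def follow(path):
    """Yield new lines appended to the file, like 'tail -f'."""
    with open(path, "r") as handle:
        handle.seek(0, 2)          # start at the end of the file
        while True:
            line = handle.readline()
            if not line:
                time.sleep(0.5)    # wait for new data
                continue
            yield line.rstrip()

if __name__ == "__main__":
    for entry in follow(LOG_PATH):
        if any(keyword in entry.lower() for keyword in ERROR_KEYWORDS):
            # In a real setup this would raise an alert or open a ticket.
            print("ALERT:", entry)
```

A dedicated log management tool does all of this (and the aggregation, retention, and search) for you, which is exactly the "easy button" argument.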

 

The question becomes, would your organization be comfortable with, and actually approve of, cloud-hosted log management, even with encrypted logging, where the storage is Amazon® S3 buckets? Let me know in the comment section below.

For most people these days, the word “hacking” conjures images of nefarious intruders attempting to gain illegal access to financial institutions, corporations, and private citizens’ computers for theft and profit. Exploitation of unsecured computer systems, cloud services, and networks makes headlines daily, with large breaches of private consumer information becoming a regular event. Various studies predict the impact of global cybercrime, with one estimate from Cybersecurity Ventures predicting damages to exceed $6 trillion by 2021. The impact is felt all over the world, with organizations rallying to protect their data and spending over $80 billion on cybersecurity in 2016.

 

There does remain some differentiation in the hacking world between “good” and “evil,” with a variety of moral postures in between. Each of these terms is subjective, of course, and depends on the point of view of the person using it. There are the “good guys” – white hat hackers, the “bad guys” – black hat hackers, and gray hats in between: labels borrowed from the hats that traditionally marked the good and bad cowboys in Western movies.

 

Tracing its Origins

 

Hacking in its infancy wasn’t about exploitation or theft. It also didn’t have anything to do with computers, necessarily. It was a term used to describe a method of solving a problem or fixing something using unorthodox or unusual methods. MacGyver, from the 1985 television show of the same name, was a hacker. He used whatever he had available to him at the moment, and his Swiss Army knife, to “hack” his way out of a jam.

The modern sense of the word hack has its origins dating back to the M.I.T. Tech Model Railroad Club minutes in 1955.

 

              “Mr. Eccles requests that anyone working or hacking on the electrical system turn off the power to avoid fuse blowing.”

 

There are some positive uses of the word in modern society; the website Lifehacker, for example, shows people how to solve everyday problems in unconventional, totally legal ways.

 

Captain Crunch

 

Early hacking took shape with tech-savvy individuals like John Draper, aka Captain Crunch, attempting to learn more about programmable systems, specifically phone networks. The practice, coined “phreaking” at the time, involved hacking the public switched telephone system, often just for fun, to learn as much as possible about it, or even to get free phone calls. John Draper’s infamous nickname, Captain Crunch, was derived from the fact that a toy whistle found in Cap’n Crunch cereal emitted a 2600 Hz tone that phone carriers used to make a telephone switch end a call, which left an open carrier line. That line could then be used to make free phone calls.

 

There were many such exploits on older telephone systems. In the mid-’80s, I used to carry a safety pin with me at all times. Why? To make free phone calls. I didn’t understand the mechanism at the time, but I knew that if I connected the pin end to the center hole of a pay-phone mouthpiece and touched the other end to any exposed metal surface on the phone, often the handset cradle, I would hear a crackle or clicking noise followed by a dial tone, and I would then be able to dial any number on the phone without putting any money in it.

 

Later I would learn that this was due to the fact that older phone systems used ground-start signaling which required the phone line to be grounded to receive dial tone. Normally this grounding was accomplished with a coin inserted into the phone, which controlled a switch that would ground the line, but my method using a safety pin did the same thing.

 

I’m assuming of course, that the statute of limitations has run out on these types of phone hacks…

 

Hacking Motivation

 

Phone phreakers like Captain Crunch, and later even his friend Steve Wozniak (yes, the Woz), would develop these techniques further to hack the phone system, more often than not for relatively harmless purposes. Draper cites a number of pranks they pulled through their phone hacking, including:

 

  • Calling the Pope to confess over the phone
  • Obtaining the CIA crisis hotline to the White House to let them know they were out of toilet paper
  • Punking Richard Nixon after learning his code name was “Olympus” when someone wanted to speak with him on the phone

 

Draper would eventually get caught and serve jail time for his phone escapades, but what he had done wasn’t done for profit or malicious reasons. He did it to learn how phone systems worked. Nothing more.

 

Kevin Mitnick, arguably the world’s most infamous hacker, speaks in his books and his talks about the same thing. His adventures in hacking computer systems were done mostly “because he could,” not because he thought there would be any big payoff from doing so. He found it a challenge and wanted to see how far he could get into some of these early networks and systems.

 

Hacking for the IT Professional

 

For the modern IT professional, hacking continues to hold a few different meanings. The first is the thing you must protect your network and your information from – malicious hacking. The second might be your approach to solving problems in non-traditional ways – hacking together a fix or solution to an IT problem. The third might be exposing yourself to the methods and techniques used by the black hat community in order to better understand and protect yourself from them – arguably white hat hacking.

 

IT staff, especially those with responsibility for security, can and should learn, practice, and develop some hacking skills to understand where their main vulnerabilities lie. How do we do this without getting arrested?

 

Over the next several posts, I'm going to discuss different options that you have, as the everyday IT pro, to learn and develop some practical, real-world hacking skills, safely and legally.

 

That said, I will offer a disclaimer here and in subsequent posts: Please check your local, state, county, provincial, and/or federal regulations regarding any of the methods, techniques, or equipment outlined in these articles before attempting to use any of them. And always use your own private, isolated test/lab environment.

 

Remember how much trouble Matthew Broderick got himself into in WarGames? And all he wanted to do was play some chess.

If cloud is so great, why is hybrid a thing?

 

Microsoft, Amazon, and Google are all telling us we don’t need our own data centers anymore. Heck, they’re even telling application developers that they don’t need servers anymore. The vendors will provide the underlying server infrastructure in a way that’s so seamless to the application that the app can be moved around with ease. Bye-bye, SysAdmins?

 

STOP. Time for a reality check.

 

It’s time to let loose on why the cloud is a bad, bad thing. I won’t even try to argue with you. I’ve worked in tech at a bank and for police and defense, so I know there are valid reasons. Here’s your chance to vent, I mean explain, why you’re not throwing your eggs into a cloudy basket.

 

Cost 
It’s more than just a question of OPEX versus CAPEX. Sometimes it’s cash flow. Sometimes it’s return on investment. Maybe the numbers still don’t add up to use 20TB + of storage in the cloud versus some on-premises storage arrays?

 

Compliance
Banks, police, and defense aren’t the only ones bound by strict regulations. Do you have sensitive data or industry red tape that states that the buck stops with you and data can’t go outside of your walls (or your full control)?

 

Concealment
Because data security doesn’t start with a C. Vendor X is a third-party so maybe we don’t trust them with our data. Do we believe that we can secure our own systems the best because we have a vested interest here? After all, it is our business on the line, our customer data, our reputation. Or maybe there is a chance that Vendor X is snooping around in our data for their gain?

 

Control
Even if you did assume that Vendor X is better at security and data breach detection than you are (surely not!), they still have the ability to do stupid things like not managing servers properly or not keeping reliable backups. If it does all go horribly wrong, maybe your bosses legally need to have someone in-house to shout at (or fire), and can’t wave that off with a cloud services agreement? Do you have a locked-down environment controlled by group policy that you can’t replicate in the cloud unless you run a full Windows server there, anyway?

 

Connectivity
Now, I know enterprise people who laugh at this one, because surely everybody has great, fast, redundant internet. That is not always the case, especially in smaller organizations. A lack of network bandwidth or a reliable connection is the first roadblock to going anywhere near the cloud. Add to that the complexity of VPNs for specific applications in some industries, and even Google, Amazon, or Microsoft might not be up to the task. Or maybe you need to add other firewall services, which makes the whole endeavor cost prohibitive.

 

Another aspect to connectivity is integration with other systems and data sources. Maybe you've got some seriously complex systems that are a connected piece of a bigger puzzle and cloud can't support or replace that?

 

Concern about vendor lock-in
I don’t envy someone who has to make a choice about which cloud to use. Do you spread your risk? Can you easily move a server instance from one cloud to another? And we haven’t talked about SaaS solutions and data export/imports.

 

See what I did there?

 

So, go on. Tell me what’s holding you back from turning off every single on-premises server. I promise I’ll read the comments. I’m expecting some good ones!

This week's Actuator comes to you in advance of my heading to Austin for some THWACKcamp filming next week. This filming may or may not involve a bee costume. I'm not promising anything.

 

Microsoft Launches Windows Bug Bounty Program Because Late Is Better Than Never

Interesting that this took as long as it did, but it is wonderful to see Microsoft continue to make all the right moves.

 

Passwords Evolved: Authentication Guidance for the Modern Era

Great summary from Troy Hunt about the need for passwords to evolve.

 

A Wisconsin company will let employees use microchip implants to buy snacks and open doors

Or, forget passwords and just go to chip implants. I'm kinda shocked this isn't in use at a handful of hedge funds I know.

 

First Human Embryos Edited in U.S.

Time to upgrade the species, I guess.

 

Indoor robots gaining momentum - and notoriety | The Robot Report

Including this link mostly because I never knew "The Robot Report" existed, and now I can't look away.

 

The Worst Internet In America

I thought for sure this was a report about my neighborhood.

 

Why automation is different this time

Set aside 10 minutes to watch this video. I want you to understand why your job roles are going to change sooner than you might think.

 

I'm not sure this is legit, but the store did have a Mr. Fusion for sale:

By Theresa Miller and Phoummala Schmitt

 

Hybrid IT has moved from buzzword status to reality. More organizations are realizing that they are in a hybrid world. Along with any potential impact on your organization, you should be thinking about the following: What is hybrid IT? Why do you care? What does it mean for the future of IT?

 

Hybrid IT and the Organization

 

The introduction of the cloud has made organizations wonder what hybrid IT means to them and their business applications. Hybrid IT is any combination of on-premises and cloud in just about any capacity: cloud for Infrastructure as a Service (IaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and any other cloud option you may choose. The moment you choose a cloud to provide something in your enterprise, you are now in a mode of hybrid IT.

 

With hybrid IT comes the same level of responsibility as on-premises. Moving an application to the cloud doesn’t mean that the cloud provider is responsible for backups, monitoring, software updates, or security unless that is part of your agreement with that cloud provider. Make sure you know your agreement and responsibilities from the beginning.

 

Hybrid IT can provide cost savings, and while some may argue otherwise, it comes down to a budget shift to operational cost. The true value is that you remove the capital overhead of maintaining your own servers, heating, cooling, and sometimes even software updates.

 

Is there value to a hybrid configuration within a Microsoft Exchange deployment?

 

Looking back, it seems Microsoft was one of the great innovators when it came to email in the cloud. It wasn't exactly successful in the beginning, but today this option has grown into a very stable product, making it the option of choice for many organizations. So how does email factor into hybrid? In my view, migrating to Exchange Online through a hybrid deployment is worth it. That's because of the ability to fail back, the ability to keep some email workloads on-site, and the ability to create a migration experience similar to an on-premises deployment. These options work together to create a more seamless migration experience overall.

 

How, you ask? Here are some of the technical functionalities that are vital to that seamless experience.

 

  • Mail routing between sites – When correctly configured, internal and external routing appear seamless to the end-user
  • Mail routing in shared namespace – This is important to your configuration if the internal and external SMTP domain remains the same
  • Unified Global Address List – Contributing to the seamless user experience, the user sees all of their coworkers in one address list, regardless of whether they are on-premises or in the cloud
  • Free/Busy is shared between on-premises and cloud - This also contributes to the seamless user experience by featuring a visible calendar showing availability, no matter where the mailbox exists
  • A single Outlook web app URL for both on-premises and cloud – If your organization uses this functionality, your configuration can be set up with the same URL, regardless of mailbox location

 

How about hybrid IT for VDI?

 

VDI has been showing significant growth in organizations. It is becoming more interesting to companies with an estimated growth rate of 29% over the next couple of years. So what about hybrid? Well, to date we are still seeing the strongest options for VDI being within on-premises product options. That being said, there are some cloud options that are getting stronger that can definitely be considered.

 

Many of these options do not have strong plans for hybrid, but are rock solid if you are looking for one or the other: on-premises or cloud, but not both. So, what are the gaps for hybrid? To date, many of these options have proprietary components that only work with certain cloud providers. Connector options between on-premises and cloud are still in the early stages, and there needs to be more consideration around applications that are on-premises that need to work in the cloud.

 

Hybrid IT - Ready or not

 

So, if you are already moving just a single application to the cloud, you are embarking on the hybrid IT journey. When moving to Microsoft Exchange Online, be sure to use hybrid for your deployment. Last but not least, if you are ready for VDI, choose either on-premises or cloud only to get started. Also, be prepared for some bumps in the road if your applications are on-premises and you choose to put your VDI in the cloud, because this option is very new and every application has different needs and requirements.

 

If you would like to learn more about hybrid IT for VDI and Exchange, check out our recent webcast, "Hybrid IT: Transforming Your IT Organization." And let us know what you think!

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

In some ways, data has become just as much a colleague to federal IT managers as the person sitting next to them. Sure, data can’t pick up a burrito for you at lunchtime, but it’s still extraordinarily important to agency operations. Data keeps things going so that everyone in the agency can do their jobs – just like your fellow IT professionals.

 

Unfortunately, as my colleague Thomas LaRock wrote last year, too many people still treat data as a generic commodity instead of a critical component of application performance. But applications are at the heart of just about everything government employees do, and those applications are powered by databases. If there’s a problem with an application, it’s likely due to an underlying performance issue with the database it runs on.

 

As data and applications continue to become more intertwined, it’s time to get serious about employing strategies to ensure optimal database performance. Here are five tips to get you started:

 

1. Integrate DBAs into the IT mix

 

It may seem incredible, but to this day many agencies are still arranged in siloes, with different teams shouldering separate responsibilities. As such, despite their importance to network and application performance, many DBAs still operate in their own bubbles, separate from network administrators and IT managers. But, IT and DBA teams should work together to help ensure better application performance and availability for everyone’s sake.

 

2. Establish performance baselines before you start monitoring

 

Your strategy starts with monitoring for performance issues that may be causing problems with an agency’s applications. Before you even begin this, you’ll need to set up baselines to measure against. These baselines will allow you to track the code, resource, or configuration change that may be causing the anomalies and fix the issues before they become larger problems.
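As a simple illustration of what "baseline first, monitor second" can look like, here is a hedged Python sketch that computes a mean and standard deviation from historical response-time samples and flags new measurements that drift too far from that baseline. The sample values and the three-sigma threshold are assumptions for the example, not a prescription.

```python
import statistics

# Historical response times (ms) gathered before monitoring begins.
# These values are illustrative only.
baseline_samples = [120, 118, 125, 130, 122, 119, 127, 124]

baseline_mean = statistics.mean(baseline_samples)
baseline_stdev = statistics.stdev(baseline_samples)

def is_anomalous(value_ms, sigmas=3.0):
    """Flag a measurement that falls outside the baseline band."""
    return abs(value_ms - baseline_mean) > sigmas * baseline_stdev

# New measurements arriving from monitoring.
for sample in (123, 131, 240):
    if is_anomalous(sample):
        print(f"{sample} ms is outside the baseline -> investigate recent changes")
    else:
        print(f"{sample} ms is within the expected range")
```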

 

3. Start monitoring — but take it to the next level

 

Take things a step further by digging deeper into your data. Use real-time data collection and real-time monitoring in tandem with traditional network monitoring solutions to improve overall database performance, and maintain network and data availability. Incorporate tools with wait-time analysis capabilities to help identify how an application request is being processed, and which resources that application may be waiting on. This can help pinpoint the root cause of performance issues so you can see if they’re associated with your databases.
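If your databases happen to be SQL Server, one way to peek at wait-time data yourself is to query the sys.dm_os_wait_stats view. The sketch below is a minimal illustration using pyodbc with a placeholder connection string; it shows the idea behind wait-time analysis, not a replacement for a purpose-built monitoring tool.

```python
import pyodbc

# Placeholder connection string; replace server, database, and credentials.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-sql-server;DATABASE=master;Trusted_Connection=yes;"
)

# Top waits by accumulated wait time since the last service restart.
QUERY = """
SELECT TOP 10 wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0
ORDER BY wait_time_ms DESC;
"""

with pyodbc.connect(CONN_STR) as conn:
    for wait_type, wait_ms, tasks in conn.execute(QUERY):
        print(f"{wait_type:<30} {wait_ms:>12} ms over {tasks} waits")
```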

 

4. Then, go even further — into your application stack and beyond

 

Applications are co-dependent upon each other. When one slows down or fails, it could adversely affect the entire stack. Therefore, you’ll want to use monitoring solutions that provide visibility across your entire application stack, not just sections or individual applications. This includes software, middleware, and, especially, databases. This type of monitoring can help you zero in on potential issues wherever they may reside in your organization, and make it much easier to address them to minimize downtime and keep things rolling.

 

5. Don’t stop — be proactive and continuously monitor

 

Proactive and continuous monitoring is the best approach, and must involve software and teamwork. Start with deploying solutions that can automatically monitor applications and databases 24/7. Make sure that everyone is on the same page and appreciates end-user expectations in regards to page load and response times. Know that the work the team does impacts the entire agency, and can directly influence – positively or negatively – their colleagues’ efforts toward achieving their agencies’ goals.

 

Databases and applications will continue to play a part in these efforts, and you’ll be working alongside them for as long as you’re in federal IT. They might not be able to chat with you over a cup of coffee, but they’ll always be there for you – until they’re not.

 

Don’t let it get to that point. Do whatever it takes to keep your databases and applications working just as hard as you.

 

Find the full article on GovLoop

If budgets allowed, I think all of us in IT might spend a month every year attending events. And while Bruno Mars was apparently a great way to wrap up Cisco Live 2017, the real draw for IT professionals is, and always will be, education. Technology is evolving more rapidly than ever. Cloud, DevOps, and even diversifying, specialized on-premises gear like flash storage create more and more demand for learning just to keep up. Of course, the sad reality is that travel and education budgets don’t support a month of fun conference travel every year. This conundrum led to the genesis of THWACKcamp.

 

For 2017, SolarWinds will host its sixth THWACKcamp, a two-day, live, virtual learning event with eighteen sessions split into two tracks. This year, THWACKcamp has expanded to include topics from the breadth of the SolarWinds portfolio: there will be deep-dive presentations, thought leadership discussions, and panels that cover more than best practices, insider tips, and recommendations from the community about the SolarWinds Orion suite. This year we also introduce SolarWinds Monitoring Cloud product how-tos for cloud-native developers, as well as a peek into managed service providers’ approaches to assuring reliable service delivery to their subscribers. A holdover from 2016 is the senior executive keynote, with a look into the future of SolarWinds to see where its products are headed. THWACKcamp 2017 will be better than ever, so be sure you’re there October 18-19.

 

 

Registration is now open! Just click on the link below. If you like, you can also download calendar reminders to make sure you don’t miss a single opportunity to interact live with SolarWinds staff, technology experts, and, of course, the amazing members of the THWACK community. It wouldn’t be a THWACK event without lots of cool giveaways and live geek fun, so you’ll want to be there for that, too.

 

Thousands of IT professionals of all kinds attend THWACKcamp every year, whether they’re SolarWinds customers or not. Our goal, as always, is to provide expert advice on technology, skills, and career development to keep you ahead of the curve and loving your work.

 

See you there! Register now >>

In a recent post on the state of data security, I discussed how the nature of our privacy online and the security of our personal information is at serious risk and only getting worse. Now, instead of focusing on the problem, I’d like to focus on some helpful solutions we can implement at the individual, organizational, and even state level.

 

We need to start with the understanding that there’s no such thing as absolute security. All our solutions are small pieces to an overall security awareness strategy—this means there’s no silver bullet, no single vendor solution, and no magical security awareness training seminar that will solve all our problems.

 

However, when we have the combination of proper education and small implementations of both technology and culture, our overall security posture becomes more robust.

 

The first thing we need to get in our heads is that we’re typically more reactive than proactive. How often have you attended a security awareness seminar at work or implemented some sort of patch or security technology in response to a threat on the news, rather than in anticipation of future threats?

 

If we’re only ever responding to the threat of the day, we’ve already lost.

 

Individual level

First, there is little to no reason why any institution other than a bank or credit bureau needs a social security number, or really, that much personal information in general. Sometimes a service requires a home address and credit card number for shipping, but that’s where it should end. For e-commerce, it’s better to use a low-limit credit card dedicated only to online purchases rather than a card with a very high limit, or worse yet, a check card number directly attached to a checking account. In this way, there is at least a buffer between a potential thief and our actual bank account.   

 

Second, we should be using a variety of strong passwords rather than a single, easy-to-remember password for all our online logins. Personally, I believe the technology exists for passwords to be phased out eventually, but until that happens, our passwords should be complex, varied, and changed from time to time.

 

Third, we can choose browsers that don’t track our movement online, and we can opt for email services that both encrypt and honor the privacy of the content of our messages. Granted, there is certainly a trust element there, and ISPs still know what we’re doing, but remember that each small piece we add is part of the bigger picture of our overall security posture.

 

And of course, we should be using all the best practices, such as utilizing a firewall, locking our personal computers and encrypting their hard drives, keeping passwords private, and deleting old and unused online accounts (such as from MySpace or AOL).

 

Organizational level

At an organizational level—whether that be a company, service provider, municipality, etc.—the cost and complexity increase dramatically, especially when dealing with others’ personal information. Whether it's employees, customers, or members of a social community, organizations must be especially proactive to protect the data they store within their infrastructure.

 

First, a vehement adherence to security best practices must be ingrained in the culture of the executive staff and every single employee in the company. This includes the IT staff. Because internet usage is now generally very transactional, engineers need to be educated on how attackers actually hack systems and reverse engineer technology. This is security awareness training for IT.

 

Second, companies must encrypt data both at rest and in motion on the backend. Yes, it’s more work and money, but this alone will mitigate the risk of data misuse in the event of a data loss. This involves encrypting server hard drives and using only encrypted channels for data in motion. This can become very cumbersome with regard to east-west traffic within a data center itself, but the principle should be applied where it can be.
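As a small, hedged illustration of the at-rest half of that advice, the Python sketch below uses the third-party cryptography library's Fernet recipe to encrypt a record before it is written to disk. Key management, arguably the hard part, is reduced to a single generated key here purely for demonstration.

```python
from cryptography.fernet import Fernet

# In production the key would live in a key management system,
# not alongside the data; generating it inline is for illustration only.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer_id=1234;email=user@example.com"

# Encrypt before the record ever touches disk.
token = cipher.encrypt(record)
with open("customer_record.enc", "wb") as handle:
    handle.write(token)

# Later, decrypt only when the application actually needs the plaintext.
with open("customer_record.enc", "rb") as handle:
    plaintext = cipher.decrypt(handle.read())
print(plaintext.decode())
```

Data in motion follows the same principle: use TLS-protected channels end to end rather than sending anything in the clear.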

 

Third, organizations storing others’ personal information should consider decentralizing data as much as possible. This is also expensive because it requires the infrastructure and culture shift within an IT department used to centralizing and clustering resources as much as possible. Small and medium-sized businesses are especially vulnerable because attackers know they are easier targets, so they especially need to make the educational and cultural changes to protect data.

 

In a recent article, I discussed the top 10 network security best practices organizations should stick to. These include keeping up with current patches, making use of good endpoint protection, using centralized authentication, using a decent monitoring and logging solution, staying on top of end-user training, and preventing or limiting the use of personal devices on the corporate network. These are all ways to prevent data leaks and outright breaches.

 

Government oversight

Our municipalities and larger government entities should be following these principles for their internal infrastructures as well, but how does government oversight factor into our overall security posture?

 

Government regulations for financial institutions already exist, but what about other industries such as e-commerce, social media providers, our private employers, etc.? This is extremely difficult because laws differ from state to state and country to country, so how can government oversight help protect our personal information online?

 

This is a debatable topic because it involves the question of how much involvement government should have in the private sector and in our private lives. However, there are some things that governments can do that don’t impinge on privacy but help to ensure security.

 

First, there should be legislation governing the securing of third-party data. Data is the new oil, but it’s also a major liability. So just as it’s illegal for someone to steal a valuable widget off a store shelf, there should be explicit legislation and subsequent consequences for the theft or mishandling of our information. This is difficult, because the bad guys are the thieves, not always the company that was breached.

 

This means there need to be better methods to track stolen data after a breach in order to capture and penalize the attacker. However, without tracking data after a breach, the parties left holding the bag are the organization that suffered the data breach and the individuals whose information was stolen.

 

We need to determine where the responsibility lies. Is it all on the end-user? Is it all on companies? Is it by government legislation? The reality is that it’s a decentralized responsibility in the sense that companies and governments store our information and are therefore responsible for keeping it safe. In most cases, though, we’ve also chosen to share that information, so we bear responsibility as well.

 

To some extent, governments can regulate the manner in which third-party data is stored. This would increase overall security posture and penalize organizations that mishandle our private information. Ultimately, the criminal is the thief, but this way, the organizations that handle our data would have the incentive to improve their security posture as well.

 

In conclusion

We need to remember that there’s no such thing as absolute security. The entire security paradigm in our society must change. Rather than being reactive and relying on stopgap measures, security awareness today must begin in the elementary school classroom and continue into the boardroom. Our solutions are small pieces to an overall security awareness strategy, and this is okay. And if done proactively and dutifully, they will increase our overall security posture and decrease the risk to our information online.

 

Here I outlined only a few pieces to this puzzle, so I’d love to hear your thoughts and additional suggestions in the comments.

Without a doubt, we're at a tipping point when it comes to security and the Internet of Things (IoT). Recently, security flaws have been exposed in consumer products, including children's toys, baby monitors, cars, and pacemakers. In late October 2016, Dyn®, an internet infrastructure vendor, suffered a malicious DDoS attack that was achieved by leveraging malware on IoT devices such as webcams and printers that were connected to the Dyn network.

 

No, IoT security concerns are not new. In fact, any device that's connected to a network represents an opportunity for malicious activity. But what is new is the exponential rate at which consumer-grade IoT devices are now being connected to corporate networks, and doing so (for the most part) without IT's knowledge. This trend is both astounding and alarming: if your end user is now empowered to bring in and deploy devices at their convenience, your IT department is left with an unprecedented security blind spot. How can you defend against something you don't know is a vulnerability?

 

BYOD 2.0


Right now, most of you are more than likely experiencing a flashback to the early days of Bring Your Own Device (BYOD) - when new devices were popping up on the network left and right faster than IT could regulate. For all intents and purposes, IoT can and should be considered BYOD 2.0. The frequency with which IoT devices are being connected to secured, corporate networks is accelerating dramatically, spurred on in large part by businesses' growing interest in leveraging data and insights collected from IoT devices, combined with vendors' efforts to significantly simplify the deployment process.

 

Whatever the reason, the proliferation of unprotected, largely unknown, and unmonitored devices on the network poses several problems for the IT professionals tasked with managing networks and ensuring organizational security.

 

The Challenges


First, there are cracks in the technology foundation upon which these little IoT devices are built. The devices themselves are inexpensive, and the engineering that goes into them is focused more on a lightweight consumer experience than on an enterprise use case that necessitates legitimate security. As a result, these devices introduce new vulnerabilities that can be leveraged against your organization, whether it's a new attack vector or an existing one that's increased in size.

 

Similarly, many consumer-grade devices aren't built to auto-update, and so the security patch process is lacking, creating yet another hole in your organization's security posture. In some cases, properly configured enterprise networks can identify unapproved devices being connected to the network (such as an employee attaching a home Wi-Fi router), shut down the port, and eradicate the potential security vulnerability. However, this type of network access control (NAC) usually requires a specialized security team to manage and is often seen only in large network environments. For the average network administrator, this means it is of premier importance that you have a fundamental understanding of and visibility into what's on your network - and what it's talking to - at all times.

It's also worth noting that just because your organization may own a device and consider it secure does not mean the external data repository is secure. Fundamentally, IoT boils down to a device inside your private network that is communicating some type of information out to a cloud-based service. When you don't recognize a connected device on your network and you're unsure where it's transmitting data, that's a problem.

 

Creating a Strategy and Staying Ahead


Gartner® estimates that there will be 21 billion endpoints in use by 2020. This is an anxiety-inducing number, and it may seem like the industry is moving too quickly for organizations to slow down and implement an effective IoT strategy.

 

Still, it's imperative that your organization does so, and sooner rather than later. Here are several best practices you can use to create an initial response to rampant IoT connections on your corporate network:

  • Create a vetting and management policy: Security oversight starts with policy. Developing a policy that lays out guidelines for IoT device integration and connection to your network will not only help streamline your management and oversight process today, but also in the future. Consider questions like, "Does my organization want to permit these types of devices on the corporate network?" If so, "What's the vetting process, and what management processes do they need to be compatible with?" "Are there any known vulnerabilities associated with the device and how are these vulnerabilities best remediated or mitigated?" The answers to these questions will form the foundation of all future security controls and processes.
    If you choose to allow devices to be added in the future, this policy will ideally also include guidelines around various network segments that should/should not be used to connect devices that may invite a security breach. For example, any devices that request connection to segments that include highly secured data or support highly critical business processes should be in accordance with the governance policy for each segment, or not allowed to connect. This security policy should include next steps that go beyond simply "unplugging" and that are written down and available for all IT employees to access. Security is and will always be about implementing and verifying policies.
  • Find your visibility baseline: Using a set of comprehensive network management and monitoring tools, you should work across the IT department to itemize everything currently connected to your wireless network and determine whether each device belongs or is potentially a threat. IT professionals should also look to leverage tools that provide a view into who and what is connected to your network, and when and where they are connected. These tools also offer administrators an overview of which ports are in use and which are not, allowing you to keep unused ports closed against potential security threats and avoid covertly added devices.
    As part of this exercise, you should look to create a supplemental set of whitelists - lists of approved machines for your network that will help your team more easily and quickly identify when something out of the ordinary may have been added, as well as surface any existing unknown devices your team may need to vet and disconnect immediately. (A minimal sketch of this kind of whitelist check follows this list.)
  • Establish a "Who's Responsible?" list: It sounds like a no-brainer, but this is a critical element of an IoT management strategy. Having a go-to list of who specifically is responsible for any one device in the event there is a data breach will help speed time to resolution and reduce the risk of a substantial loss. Each owner should also be responsible for understanding their device's reported vulnerabilities and ensuring subsequent security patches are made on a regular basis.
  • Maintain awareness: The best way to stay ahead of the IoT explosion is to consume updates about everything. For network administrators, you should be monitoring for vulnerabilities and implementing patches at least once a week. For security administrators, you should be doing this multiple times a day. Your organization should also consider integrating regular audits to ensure all policy-mandated security controls and processes are operational as specified and directed. At the same time, your IT department should look to host some type of security seminar for end-users where you're able to review what is allowed to be connected to your corporate network and, more importantly, what's not allowed, in order to help ensure the safety of personal and enterprise data.
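To show what the whitelist idea from the visibility-baseline step above might look like in practice, here is a minimal Python sketch that compares discovered MAC addresses against an approved list and reports anything unknown. The discovery step and the example addresses are stand-ins; in a real environment you would feed it from your network management and monitoring tools.

```python
# Approved devices: MAC address -> owner/description (illustrative data).
WHITELIST = {
    "00:1a:2b:3c:4d:5e": "Core switch, network team",
    "00:1a:2b:3c:4d:5f": "Conference room AP, network team",
}

def discover_devices():
    """Stub for device discovery; replace with data pulled from
    your network management or monitoring tooling."""
    return [
        ("00:1a:2b:3c:4d:5e", "10.0.10.2"),
        ("a4:77:33:12:ab:cd", "10.0.20.57"),   # unknown -> needs vetting
    ]

def audit():
    for mac, ip in discover_devices():
        if mac.lower() in WHITELIST:
            print(f"OK      {mac} ({ip}) - {WHITELIST[mac.lower()]}")
        else:
            print(f"UNKNOWN {mac} ({ip}) - vet or disconnect")

if __name__ == "__main__":
    audit()
```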

 

Final Thoughts

 

IoT is here to stay. If you're not already, you will soon be required to manage more and more network-connected devices, resulting in security issues and a monumental challenge in storing, managing, and analyzing mountains of data. The risk to your business will likely only increase the longer you work without a defined management strategy in place. Remember, with most IoT vendors more concerned about speed to market than security, the management burden falls to you as the IT professional to ensure that both your organization's and your end-users' data is protected. Leveraging the best practices identified above can help you begin ensuring your organization is getting the most out of IoT without worrying (too much) about the potential risks.


This is a cross-post of IoT and Health Check on Sys-Con.


The Need for Speed

Posted by sqlrockstar, Jul 27, 2017

 

I’ve got a quick quiz for you today. Which scenario is worse?

 

Scenario 1:

SQL Statement 1 executes 1,000 times, making end-users wait 10 minutes. 99% of the wait time for SQL Statement 1 is “PAGEIOLATCH_EX.”

 

Scenario 2:

SQL Statement 2 executes one time, also making the end-users wait 10 minutes. 99% of the wait time for SQL Statement 2 is “LCK_M_X.”

 

The answer is that both are equally bad, because both made end-users wait 10 minutes. It doesn’t matter to the end-user if the root cause is disk, memory, CPU, network, or locking/blocking. They only care that they have to wait 10 minutes.

 

The end-users will pressure you to tune the queries to make them faster. They want speed. You will then, in turn, try to tune these queries to reduce their run duration. Speed and time become the measuring sticks for success.

 

Many data professionals put their focus on run duration. But by focusing only on run duration, they overlook the concept of throughput.

 

Time for another quiz: Which scenario is better?

 

Scenario 3:

You can tune SQL Statement 3 to execute 1,000 times, and run for 30 seconds.

 

Scenario 4:

You can tune SQL Statement 4 to execute 10,000 times, and run for 35 seconds.

 

The extra five seconds of wait time is a tradeoff for being able to handle 10x the load. I know I’d rather have 10,000 happy users than 9,000 unhappy ones. The trouble here is that tuning for throughput can take a lot more effort than tuning for duration alone, and the prevailing attitude is that getting up and running matters more than designing for efficiency.

 

Once upon a time, we designed systems to be efficient. This, of course, led to a tremendous number of billable hours as we updated systems to avoid the Y2K apocalypse. But today, efficiency seems to be a lost art. Throwing hardware at the problem gets easier with each passing day. For cloud-first systems, efficiency is an afterthought because cloud makes it easy to scale up and down as needed.

 

Building and maintaining efficient applications that focus on speed as well as throughput requires a lot of discipline. Here’s a list of three things you can do, starting today, for new and existing database queries.

 

Examine current logical I/O utilization

To write queries for scale, focus on logical I/O. The more logical I/O needed to satisfy a request, the longer it takes to run and the less throughput you will have available. One of the largest culprits for extra logical I/O is the use of incorrect datatypes. I put together a couple of scripts a while back to help you with this. One script will look inside a SQL Server® database for integer values that may need to have their datatypes adjusted. Or you can run this script to check the datatypes currently residing in memory, as those are likely to be the ones you should focus on adjusting first. In either case, you are being proactive in measuring and monitoring how well the data matches the defined datatype.
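If you want a quick look at where your logical I/O is going on a SQL Server instance right now, the hedged Python sketch below (again using pyodbc and a placeholder connection string; these are not the scripts referenced above) lists the cached statements that have accumulated the most logical reads.

```python
import pyodbc

# Placeholder connection string; replace server, database, and credentials.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-sql-server;DATABASE=master;Trusted_Connection=yes;"
)

# Cached statements ranked by total logical reads.
QUERY = """
SELECT TOP 10
       qs.total_logical_reads,
       qs.execution_count,
       SUBSTRING(st.text, 1, 120) AS statement_start
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
"""

with pyodbc.connect(CONN_STR) as conn:
    for reads, executions, text in conn.execute(QUERY):
        print(f"{reads:>12} logical reads over {executions:>8} executions: {text}")
```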

 

Take a good look at what is in your pipe

After you have spent time optimizing your queries for logical I/O, the last thing you want is to find that you have little bandwidth available for data traffic because everyone in the office is playing Pokemon®. We started this post talking about speed, then throughput, and now we are talking about capacity. You need to know what your maximum capacity is for all traffic, how much of that traffic is dedicated to your database queries, and how fast those queries are returning data to the end-users.

 

Test for scalability

You can use tools such as HammerDB and Visual Studio to create load tests. After you have optimized your query, see how it will run when executed by simultaneous users. I like to take a typical workload and try running 10x, 25x, 50x, and 100x test loads and see where the bottlenecks happen. It is important to understand that your testing will not likely simulate production network bandwidth, so keep that metric in mind during your tests. You don’t want to get your code to be 100x capable only to find you don’t have the bandwidth.
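Outside of HammerDB or Visual Studio, you can rough out a scalability check with a few lines of Python. This hedged sketch ramps up concurrent executions of a single workload function (stubbed here with a sleep) and reports average latency at each level, which is enough to see where response times start to climb. Swap the stub for a real query call against a test system, never production.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_workload():
    """Stub for one execution of the query under test.
    Replace with a real call against a *test* database."""
    start = time.perf_counter()
    time.sleep(0.05)               # stand-in for query duration
    return time.perf_counter() - start

def load_test(concurrency, iterations=200):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        durations = list(pool.map(lambda _: run_workload(), range(iterations)))
    return sum(durations) / len(durations)

if __name__ == "__main__":
    for factor in (10, 25, 50, 100):
        avg = load_test(concurrency=factor)
        print(f"{factor:>3} concurrent workers -> avg {avg * 1000:.1f} ms per execution")
```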

 

Summary

When it comes to performance tuning your database queries, your initial focus will be on speed. In addition to speed, it is necessary to consider throughput and capacity as important success metrics for tuning your queries. By focusing on logical I/O, testing for scalability, and measuring your network bandwidth, you will be able to maximize your resources. 

If you love technology and enjoy learning, then working in IT without losing your mind will be a breeze. In my past 20 years of working in IT, I've personally found that if you are willing to keep learning, this will lead you down a very interesting journey—and one of great success.

 

How to choose what to learn

 

In IT, there is so much to choose from that it can leave your head spinning if you don’t know where to focus. I recently became part of a mentoring program, and my mentee struggled with this very problem. She is committed to virtualization, but the emergence of cloud left her feeling confused. Every virtualization provider is moving to some form of cloud, but in my experience, not all virtualization platforms in the cloud are created equal. She's also not from the U.S., which is another factor—some technologies are just more widely adopted in some regions of the world than others. So how did we decide what she would learn next? We knew for certain it would be a cloud-related technology. That being said, there was much to consider. So, we talked through some key questions, which I would also recommend you consider.

 

  • Which cloud providers meet the security requirements of your region in the world? Yes, this may take some research, but remember: you are looking to learn a new technology that will help you advance your career. This requires understanding which cloud providers are being adopted successfully in your area. The choice you make here should align with industry trends, as well as what will be of most value to your current employer and any potential future employers.

 

  • Is there a way for you to try the cloud provider's offering? There is nothing worse than investing too much time into learning something that you ultimately may not enjoy, or don’t believe will meet your customers'/employer’s needs. Get your hands on the technology and spend a few hours with it before committing to learning it fully. If you enjoy it, then take the gloves off and get your hands dirty learning it.

 

  • Certification? If you see value in the technology and are enjoying learning it, then look for a certification track. I personally do not believe that certification is necessary for all things, but if you are passionate about the cloud provider's offering, using certification to learn the product will go a long way—especially if you don’t yet have real-world experience with the technology. Certification opens doors, and being able to put the certification on your resume can help you get started using it in the real world.

 

So, take some time to answer these questions before you dive into what you will learn next in IT. Answering these key questions, regardless of your technical interests, will bring you one step closer to deciding what to learn next without losing your mind.

 

Technology changes fast

 

The pace of technology change is fast, and having a strategic approach to your own technical learning will keep your mind intact. Embrace, love, and learn more about technology and IT. It’s a great ride!


Sysadmania! Is Upon Us!

Posted by aguidry, Jul 27, 2017

SysAdmin Day is upon us, you SysAdmaniacs!

 

It’s your day! We know how you may feel underappreciated at times throughout the year, so we went all out and created a board game in your honor. That’s right. Now, after another annoying day resetting passwords and resolving bottlenecks caused by I/O-heavy apps, you can unwind with a couple of friends and SysadMANIA!

 

Here's a little sneak peek of what you can expect (well, something close to it):

 

 

Our game designers created SysadMANIA! with you in mind. We know what it feels like to be ignored when everything works and blamed when something breaks. We hear you! So we took your pain and turned it into hilarious fun. Head Geeks Patrick Hubbard and Leon Adato took it for a test run (tough job, huh?) and loved the experience. They agreed that the game provides a laugh-out-loud, relatable, therapeutic, thoughtful, super fun time.

 

patrick.hubbard said, “When a game uses socialization and chance cards to escape the game boss, you realize that something very special is happening. It’s Cards Against IT meets D&D meets Sorry! meets Battleship… and I laughed every turn. I laughed because the work cards are funny and Brad is truly terrible, but also because it’s cathartic. If you’ve ever suffered the indignity of having a desk in the basement, falling off a ladder hanging an AP, or failing a backup recovery, you’ll laugh even harder.

 

 

“It taps into our humanity and celebrates the glory and triumph of IT over the frustration of tedium and stupid user questions. I can’t wait for the first expansion pack! Please let it be cloud.”

 

adatole loves it, too. He says, “Since my family observes a full day, every single week, of no electronics, we play a LOT of games. On a typical Saturday, you’ll find us around the table playing all kinds of games: Monopoly, Gin Rummy, Ticket to Ride, King of Tokyo, even Munchkins, or Exploding Kittens.

 

"When the Geeks sat down and unboxed SysadMANIA! for the first time, I figured it was a one-off joke. A game about life in IT. How quaint. But the truth is that there’s a GAME here. The combination of different personas displaying various strengths and weaknesses affect your ability to get through work challenges.

 

"What I liked best was that SysadMANIA! can be run as your typical ‘every player for themselves’ game, but happens to be more interesting – and truer to the spirit of IT pros everywhere – when played cooperatively à la Forbidden Island.”

 

Check out this video, as patrick.hubbard, kong.yang, chrispaap, and stevenwhunt battle through SysadMANIA:

 

 

So, there you have it. Glowing praise for SysadMANIA! We hope the day-long showing of respect and admiration by your colleagues is just as bright. Oh and if you want to join in on the SysadMANIA fun, visit: http://go.solarwinds.com/sysadmania for your chance to win some cool prizes, or even the actual board game!


Recently, Head Geek Destiny Bertucci (Dez) and I talked about certifications on an episode of SolarWinds Lab. For almost an hour we dug into the whys and hows of certifications. But, of course, the topic is too big to cover in just one episode.

 

Which is why I wanted to dig in a little deeper today. This conversation is one that you can expect I'll be coming back to at various points through the year. This dialogue will be informed by my experiences both past and present, as well as the feedback you provide as we go on. I want this to be a roundtable discussion, so at the end we'll all have something closer to a 360-degree view. My goal is to help IT professionals of all experience levels make an informed choice about certs: which ones to pursue, how to go about studying, where to set expectations about the benefits of certifying, and even tricks for preparing for and taking the exams.

 

For today's installment, I thought it might make sense to start at the beginning, meaning a bit of a walk down Certification Lane to look at the certs I already have, when I got them, and why.

 

To be clear, I don't mean this to be a #humblebrag in any way. Let's face it. If you watched the episode, you know that there are other Geeks with WAY more certifications than me. My point in recounting this is to offer a window into my decision-making process and, as I said, to get the conversation started.

 

My first tech certification was required by my boss. I was working at a training company that specialized (as many did at the time) in helping people move from the typing pool, where they used sturdy IBM Selectrics, to the data processing center, where WordPerfect was king. My boss advised me that getting my WPCE (WordPerfect Certified Resource) cert would accomplish two things:

 

  1. it would establish my credibility as a trainer
  2. if I didn't know a feature before the test, I sure as heck would after.

 

This was not your typical certification test. WordPerfect shipped you a disk (a 5.25" floppy, no less) and the test was on it. You had up to 80 hours to complete it, and it was 100% open book. That's right, you could use any resources you had to finish the test. Because at the end of the day, the test measured execution. Instead of just asking "what 3-keystroke combination takes you to the bottom of the document," the exam would open a document and ask that you DO it. A keylogger ensured the proper keystrokes were performed.

 

(For those who are scratching their heads, it's "Home-Home-DownArrow," by the way. I can also still perfectly recall the 4-color F-key template that was nearly ubiquitous at the time.)

 

[Image: WordPerfect 4.2 keyboard template]

 

And my boss was right. I knew precious little about things like macros before I cracked open the seal on that exam disk. But I sure knew a ton about them (and much more) when I mailed it back in. Looking back, the WPCE was like a kinder, gentler version of the CCIE practical exam. And I'm grateful that was my first foray into the world of IT certs.

 

My second certification didn't come until 7 years later. By that time I had worked my way up the IT food chain, from classroom instructor to desktop support, but I wanted to break into server administration. The manager of that department was open to the idea, but needed some proof that I had the aptitude. The company was willing to pay for the classes and the exams, so I began a months-long journey into the world of Novell networking.

 

At the time, I had my own ideas about how to do things (ah, life in your 20s, when you are omniscient!). I decided I would take ALL the classes and, once I had a complete overview of Novell, I'd start taking exams.

 

A year later, the classes were a distant dot in the rear-view mirror of life, but I still hadn't screwed up the courage to start taking the tests. What I did have, however, was a lot more experience with servers (by then, desktop support was asked to do rotations in the help desk, where we administered almost everything anyway). In the end, I spent many, many nights after work reviewing the class books, and ended up taking the tests almost 18 months after the classes.

 

I ended up passing, but I also discovered the horrific nightmare landscape that is "adaptive exams" - tests that give you a medium-level question on a topic and, if you answer it correctly, follow up with a harder one. This continues until you miss a question, at which point the difficulty drops back down. And that pattern repeats until you complete all the questions for that topic. On a multi-topic exam like the Certified Novell Engineer track, that means several categories of questions coming at you like a game of whack-a-mole where the moles are armed and trying to whack you back. And the exam ends NOT when you answer all the questions, but when it is mathematically impossible for you to fail (or pass). Which led to a heart-stopping moment on question 46 (out of 90), when the test abruptly stopped and said, "Please wait for results."

 

But it turns out I had passed.
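
 

For the truly curious, that stop-early behavior boils down to logic something like the sketch below. To be clear, this is a toy model for illustration only - the difficulty levels, point values, and pass threshold are numbers I made up, not Novell's (or any vendor's) actual scoring algorithm.

```python
import random

# Toy model of an adaptive exam, loosely based on the behavior described
# above. All numbers here (difficulty levels, point values, pass threshold)
# are invented for illustration; this is not any vendor's real algorithm.

PASS_SCORE = 60                 # hypothetical points needed to pass
MAX_QUESTIONS = 90              # total questions in the pool
POINTS = {1: 1, 2: 2, 3: 3}     # harder questions are worth more points

def run_adaptive_exam(answer_correctly):
    """Simulate the exam. `answer_correctly(difficulty)` returns True/False."""
    difficulty = 2              # start with a medium-level question
    score = 0
    for question_num in range(1, MAX_QUESTIONS + 1):
        if answer_correctly(difficulty):
            score += POINTS[difficulty]
            difficulty = min(3, difficulty + 1)   # correct answer -> harder question
        else:
            difficulty = max(1, difficulty - 1)   # miss -> easier question

        # Best case for the remaining questions: all answered at max difficulty.
        best_possible = score + (MAX_QUESTIONS - question_num) * POINTS[3]

        # The exam ends the moment the outcome can no longer change.
        if score >= PASS_SCORE:
            return question_num, "Please wait for results... PASS"
        if best_possible < PASS_SCORE:
            return question_num, "Please wait for results... FAIL"

    return MAX_QUESTIONS, "PASS" if score >= PASS_SCORE else "FAIL"

if __name__ == "__main__":
    random.seed(42)
    # A test-taker who answers roughly 70% of questions correctly.
    stopped_at, result = run_adaptive_exam(lambda difficulty: random.random() < 0.7)
    print(f"Exam ended at question {stopped_at}: {result}")
```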

 

Of course, I was prepared for this on the second test. Which is why the fact that it WASN'T adaptive caused yet more heart palpitations. On question 46 I waited for the message. Nothing. So I figured I had a few more questions to answer. Question 50 passed me by and I started to sweat. By question 60 I was in panic mode. At question 77 (out of 77), I was on the verge of tears.

 

But it turns out I passed that one, as well.

 

And 2 more exams later (where I knew to ASK the testing center what kind of test it would be before sitting down), I was the owner of a shiny new CNE (4.0, no less!).

 

And, as things often turn out, I changed jobs about 3 months later. It turns out that in addition to proof of aptitude, the manager also needed an open req. My options were to wait for someone on the team to leave, or to take a job that fell out of the sky. A local headhunter cold-called my house, and the job he was pitching was server administration at significantly more money than I was making.

 

It also involved Windows servers.

 

By this time, I'd been using Windows since it came for free on 12 5.25" floppies with Excel 1.0. For a large part of my career, "NT" was short for "Not There (yet)." But in 1998, when I switched jobs, NT 4.0 had been out for a while and had proven itself a capable alternative.

 

Which is why, in 1999, I found myself NOT as chief engineer of the moon as it traveled through space, but instead spending a few months of evening hours studying for and taking the 5 exams that made up the MCSE, along with the rest of my small team of server admins.

 

Getting our MCSE wasn't required, but the company once again offered to pay for both the class and the exam as a perk of the job (ah, those pre-bubble glory days!), so we all took advantage of it. This time I wasn't taking the test because I was told to, or to meet someone else's standard. I was doing it purely for me. It felt different, and not in a bad way.

 

By that point, taking tests had become old hat. I hadn't passed every single one, but my batting average was good enough that I was comfortable when I sat down and clicked "begin exam".

 

Ironically, it would be another 5 years before I needed to take a certification test.

 

In 2004, I was part of a company that was renewing its Cisco Gold Partner status when the powers-that-be discovered they needed a few more certified employees. They asked for volunteers and I readily raised my hand, figuring this would be the same deal as last time: night study for a few weeks, take a test, and everybody is happy.

 

It turns out that my company needed 5 certifications - CCNA (1 exam), MCSE (6 exams), MCSE+Messaging (one more exam on top of the 6 for the MCSE), Cisco Unity (1 exam), and Cisco Interactive Voice Response (1 exam). Oh, and they needed them by the end of the quarter. "I'm good," I told them, "but I'm not THAT good."

 

After a little digging, I discovered a unique option: go away to a 3-week "boot camp" where they would cover all the MCSE material *and* administer the exams, go straight from that boot camp to a 1-week boot camp for the CCNA, then come home and finish up on my own.

 

It is a testament to my wife's strength of character that not only did she not kill me outright for the idea, she actually supported it. And so off I went.

 

The weeks passed in a blur of training material, independent study, exams passed, exams failed, and the ticking of the clock. And then it was home and back to the "regular" work day, but with the added pressure of passing two more exams on my own. In the end, it was the IVR exam (of all things) that gave me the most trouble. After two stupendously failed attempts, I passed.

 

Looking back, I know it was all a very paper tiger-y thing to do. A lot of the material - like the MCSE - covered things I knew well and used daily. But some (like the IVR) were technologies I had never used and never really intended to use. That wasn't the point, though, and I wasn't planning to go out and promote those certifications in any case.

 

But taking all those tests in such short order was also - and please don't judge me for this - fun. I know some people experience real test anxiety, but the rush of adrenaline and the sense of accomplishment at the end are hard to beat. In the end, I found the whole experience rewarding.

 

And that, believe it or not, was the end of my testing adventure (well, if you don't count my SCP, but that's a post for another day) - at least it WAS, until this year, when Destiny and I double-dog-dared each other into this certification marathon.

 

This time out, I think I'm able to merge the best of all those experiences. It is a lot of tests in a short period, but I'm only taking exams that prove the skills I've built up over my 30-year career. I'm not doing it to get a promotion, satisfy my boss, or meet a deadline. It's all for me this time.

 

And it's also refreshingly simple. The idea that there is ONE correct answer to every question is a wonderful fiction compared with the average day of an IT professional.

 

So that's where things stand right now. Tell me where you are in your own certification journey in the comments below. And let me know if there are topics or areas of the certification process you'd like me to explore more deeply in future posts.

This week's Actuator comes to you direct from a very quiet house because we have sent the spawn off to an overnight camp for two weeks. The hardest part for us will be doing all the chores that we've trained the kids to do over the years. It's like losing two employees!

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Avanti Breach – Does This Signify IoT Attacks Have Become Mainstream?

Yes. And, since you had to ask, I suggest you pay attention to all the other IoT hacks in the news.

 

Hacker steals $30M worth of Ethereum by abusing Parity wallet flaw

Good thing these banks have insurance to protect against loss… What's that you say? Oh.

 

A Hacker's Dream: American Password Reuse Runs Rampant

It’s time to get rid of passwords, and we all know it. Unfortunately, security isn’t a priority yet, so systems will continue to rely on antiquated authentication methods.

 

IoT Security Cameras Have a Major Security Flaw

Like I said, security is not a priority for many.

 

How Microsoft brought SQL Server to Linux

Somewhere, Leon Adato is weeping.

 

New IBM Mainframe Encrypts All the Things

This sounds wonderful until you remember that most breaches happen because someone left their workstation unlocked, or a USB key on a bus, etc.

 

Sweden leaked every car owners' details last year, then tried to hush it up

There is no security breach that cannot be made worse by trying to cover up the details of your mistakes.

 

This arrived last week and I wanted to say THANK YOU to Satya and Microsoft for making the best data platform on the planet:
