
Compliance, as it applies to IT departments, involves following rules and regulations that are meant to protect sensitive data of all types. It can govern everything from IT decision-making and purchasing, to configurations, and the policies and procedures a company must create and enforce to uphold this important task.

 

Rightfully so, many businesses are taking the obligation of compliance very seriously. After all, there is a lot at stake when fines and penalties can be levied against you (among other legal repercussions) for noncompliance.

 

As an IT pro, it's important to know what you're up against. Answer these questions from our recent Compliance IQ Quiz – or verify your score from your earlier exam – to see how your knowledge stacks up when it comes to IT compliance.

 

Despite InfoSec folklore, the actors most often involved in a breach of sensitive information come from outside your company. Unfortunately, understanding the source of these threats is only half the battle when it comes to maintaining IT security and compliance.

 

1.) Which of the following three types of cyberattacks can be classified as an external threat?

 

A) Technical attacks

B) Phishing attacks

C) Physical attacks

D) All of the above

 

HINT: Watch our "IT Security Threats and Risk Management” video (15:47 - 34:35). Click here.

 

Answer: D) All of the above

 

It is true that most threats to your data and compliance initiatives come from beyond the four walls of your organization, but that doesn’t mean your fellow employees can't somehow be involved.

 

2.) Which of the following exploits is classified as "a form of social engineering in which a message, typically an email, with a malicious attachment or link is sent to a victim with the intent of tricking the recipient to open an attachment"?

 

A) Pre-texting

B) Baiting

C) Phishing

D) Elicitation

 

HINT: Would you know if your network was breached? Read this article on solarwinds.com.

 

Answer: C) Phishing

 

If your business interacts with sensitive data that falls under the protection of HIPAA, PCI, NCUA, SOX, GLBA, or other frameworks, then compliance should be on your radar.

 

3.) Poll: Which of the following industries does your business serve? (Select all that apply)

 

A) Financial services

B) Healthcare

C) Technology

D) Federal

E) Education

F) Other

 

See who participated in the quiz in this chart, below:

solarwinds-compliance-iq-quiz.png

 

No locale, industry, or organization is bulletproof when it comes to the compromise of data, even with a multitude of compliance frameworks governing the methods used to prevent unlawful use or disclosure of sensitive data.

 

4.) In the past year, which industry experienced the highest number of breaches of sensitive information? For reference, we have highlighted the key compliance frameworks that guide these industries.

 

A) Financial services - PCI DSS, NCUA, SOX, GLBA, and more

B) Healthcare - HIPAA

C) Technology - ISO, COBIT, and more

D) Federal - FISMA, NERC CIP, GPG 13, and more

E) Education - FERPA

F) Other

 

HINT: Check out Verizon’s 2016 Data Breach Investigation Report. Click here.

 

Answer: A) Financial services

 

If your business must comply with a major IT regulatory framework or scheme, you may be subject to serious penalties for noncompliance.

 

5.) Not adhering to a compliance program can have severe consequences, especially when breaches are involved. Which of the following can result from noncompliance?

 

A) Withdrawal or suspension of a business-critical service

B) Externally defined remediation programs

C) Fines

D) Criminal liability

E) All of the above

 

HINT: Read this Geek Speak post titled The Cost of Noncompliance. Think big picture and across all frameworks.

 

Answer: E) IT compliance violations are punishable by all of these means, and more.

 

The cost of a breach goes well beyond the fines and penalties levied by enforcement agencies. It also includes the cost of detecting the root cause of a breach, remediating it, and notifying those affected. There are also legal expenditures, business-related expenses, and revenue lost to a damaged brand reputation to take into account.

 

6.) True or False: The price that businesses pay for sensitive data breaches is on the rise globally.

 

HINT: You do the math!

 

Answer: True. According to the Ponemon Institute, the cost associated with a data breach has risen year over year to a current average of $4 million.

 

Healthcare is increasingly targeted by cyberattacks, as evidenced by a spree of high-profile breaches and increased enforcement efforts from the OCR over the past few years.

 

7.) What type of data are hackers after if your business is in the healthcare industry?

 

A) CD or CHD

B) ePHI

C) PII

D) IP

 

HINT: Read this post: Top 3 Reasons Why Compliance Audits Are Required.

 

Answer: B) ePHI - Electronic Protected Health Information

 

Other Definitions: CD/CHD - Cardholder Data; ePHI - Electronic Protected Health Information; PII - Personally Identifiable Information; IP - Intellectual Property

 

Despite the higher black market value of healthcare data, 2016 saw a greater volume of compromised PCI data. This makes it all the more important to understand this framework.

 

8.) Which response most accurately describes PCI DSS compliance?

 

A) The organization can guarantee that credit card data will never be lost

B) The organization has followed the rules set forth in the Payment Card Industry Data Security Standards, and can offer proof in the form of documentation

C) The organization is not liable if credit card data is lost or stolen

D) The organization does not store PAN or CVV data under any circumstances

 

HINT: Check out this article: Best Practices for Compliance.

 

Answer: B) The organization has followed the rules set forth in the Payment Card Industry Data Security Standards, and can offer proof in the form of documentation.

 

According to the Verizon 2016 Data Breach Investigation Report, 89% of breaches had a financial or espionage motive. With a long history of unified compliance efforts, the banking industry certainly takes this seriously, and so should you.

 

9.) True or False: The Federal Financial Institutions Examination Council (FFIEC) is empowered to prescribe uniform principles, standards, and report forms to promote uniformity in the supervision of financial institutions.

 

HINT: See footnote #3 from The Cost of Noncompliance.

 

Answer: True.

 

Though your aim as an IT pro may be to get compliance auditors off your back, the cybersecurity threat landscape is constantly changing.

 

10.)  True or False: If your organization passed its first compliance audit, that means its network is secure.

 

HINT: Watch our Becoming and Staying Compliant video (10:06- 11:09). Click here.

 

Answer: False. Continuous IT compliance is key to meeting and maintaining regulatory requirements long-term.

 

Any feedback on this quiz, or burning questions that came as a result? Share a comment; we'd love to hear your thoughts.

Today's posting steps out of my core skills a bit. I've always been an infrastructure guy. I build systems. I help my customers with storage, infrastructure, networking, and all sorts of solutions to solve problems in the data center, cloud, etc. I love this "puzzle" world. However, there's an entire category of IT that I've assisted for decades but never really been a part of. Developers, as far as I'm concerned, are magicians. How do they start with an idea and, simply by using "code" they generate in some framework or language, turn it into a working application? I simply don't know. What I do know, though, is that the applications they're building are undergoing systemic change, too. The difference between where we've been and where we're going comes down to the need for speed, responsiveness, and agility in how the code is modified.

 

In the traditional world of monolithic apps, these wizards needed to add features through adjunct applications, or learn the code in which some "off the shelf" software was written in order to make any changes. Moving forward, the microservices model takes over: code fragments, each purpose-built to either add functionality or streamline an existing function.

 

Offerings like Amazon's Lambda, Iron.IO, and Microsoft Azure (with Google soon to follow) have upped the ante. I feel that the term "serverless" is an inaccuracy, as workloads will always need somewhere to run. There must be some form of compute element, but by abstracting it even further from the hardware (albeit virtual hardware), we rely less on where these workloads run and what they depend on. Particularly when you're a developer, worrying about infrastructure is something you simply do not want to do.

 

Let's start with a conceptual definition. According to Wikipedia, serverless computing, "also known as Function as a Service (FaaS), is a cloud computing code execution model in which the cloud provider fully manages starting and stopping virtual machines as necessary to serve requests, and requests are billed by an abstract measure of the resources required to satisfy the request, rather than per virtual machine, per hour." Really, this is not as jargon-y as it sounds. You, the developer, rely on an amount of processor/memory/storage, but you don't have any reliance on persistence or a particular location. I feel it's designed to be a temporary sandbox for your code.
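
To make that less abstract, here is a minimal sketch of what a function looks like in this model, written as an AWS Lambda-style Python handler. The event shape and the greeting logic are purely illustrative assumptions; the point is that you write the function and the provider worries about where and how it runs.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda-style handler: the provider invokes this function
    on demand and bills only for the resources the invocation consumes."""
    # 'event' carries the request payload; its shape depends on the trigger
    # (API gateway, queue message, file upload, etc.) -- illustrative here.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"})
    }
```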

 

In many ways, it's an alternative to the traditional IT infrastructure approach of requesting virtual machine(s) from your team, waiting for them, and then, if you've made some code errors, waiting once again for those machines to be re-imaged. Instead, you spin up a temporary amount of horsepower, load it with a copy of your code, run it, and then destroy it.

 

In the DevOps world, this kind of ephemeral environment is a beautiful thing. Functionally, building code in a DevOps world means that your application is made up of many small pieces of code rather than a single monolithic application. We have adopted the term "microservices" for these fragments of code. Agility and flexibility in these small pieces of code are critical; in fact, the whole concept of DevOps is about agility. In DevOps, as opposed to traditional code development, rollouts of code changes, updates, patching, etc. can take place with a level of immediacy, whereas full application rollouts require massive planning. Here, a rollout can potentially happen almost instantaneously, as these pieces tend to be tiny, and code updates can be deployed and/or rolled back in minutes.

 

While it is certain that DevOps, and the agility it portends, will change the dynamic of information technology, particularly in companies that rely on home-grown code, what is truly uncertain is whether the concepts of serverless computing will ever grow beyond the test/dev world and into production environments. I find it difficult, maybe due to my own lack of coding skills, to envision deploying workloads in such a piecemeal approach; putting the pieces together inside a single workload feels far more approachable to me.

A Never Ending IT Journey around Optimizing, Automating and Reporting on Your Virtual Data Center


Reporting

IT reporting at its best is pure art backed by pure science and logic. It is storytelling with charts, figures, and infographics. The intended audience should be able to grasp key information quickly. In other words, keep it stupid simple. Those of you following this series and my 2016 IT resolutions know that I’ve been beating the “keep it stupid simple” theme pretty hard. This is because endless decision-making across complex systems can lead to second-guessing, and we don’t want that. Successful reporting takes the guesswork out of the equation by framing the problem and solution in a simple, easily consumable way.

 

The most important aspect of reporting is knowing your target audience and creating the report just for them. Next, define the decision that needs to be made. Make the report pivot on that focal point, because a decision will be made based on your report. Finally, construct the reporting process in a way that will be consistent and repeatable.

  • excerpted from Skillz To Master Your Virtual Universe SOAR Framework

 

Reporting in the virtual data center tells the story of the virtualization professional's journey. The story starts with the details of the virtual data center and its key performance indicators. It evolves into a journey of how to get what is needed to expand the delivery capabilities of the virtual data center. With agility, availability, and scalability at the heart of the virtual data center, reporting is the justification for optimization and automation success.

 

Audience and context matter

Reporting is an IT skill that provides the necessary context for decision-makers to make their decision. The key aspects of reporting are the audience and the context. Knowing who the audience is guides the IT pro on the context, i.e., the data, the data analysis, and the data visualization required in the report. To report adeptly, an IT professional needs to answer the following questions: for whom is the report intended, and what needs to be included for a decision?

 

Reporting molds data and events into a summary highlighting key truths for decision makers to make quick, sound decisions. It is neither glamorous nor adrenaline-pumping, but it shows IT mastery in its final, evolved form: a means to an end.

 

This post is a shortened version of the eventual eBook chapter. Stay tuned for the expanded version in the eBook.

Once upon a time, a small but growing geeky community found a way to instantly communicate with each other using computers. Internet Relay Chat (IRC), and later iterations like ICQ, were the baby steps toward today's social media platforms, connecting like-minded people into a stream of consciousness. It didn't take long before some smart developer types coded some response bots into this chat. IRC chat bots provided an interface for finding answers without a human needing to be at the other end.

 

All types of bots

Today, some social media platforms have embraced bots more than others. Slack's library of bots is extensive, ranging from witty responses to useful business enablers that highlight information or action requests. Twitter's bots tend to fall more into the humor category, or the downright annoying auto follow/unfollow bots. At the useful end of the scale is Dear Assistant, a search-style bot for answering your questions. My personal favorite, though, is the bot that tweets in real time as different passengers and crew on the anniversary of the Titanic voyage each year.

 

You can see the difference between bots for automating the dissemination of information, bots that provide a reactive response to input, and bots that connect to and use another service before delivering their response. Our acceptance of this method of interaction and communication is growing, though I wouldn’t say it’s totally commonplace in the consumer market just yet. People still tend to prefer to chat (even online) with other people instead of bots, when looking for an answer to a problem that they can’t already find with a web search.
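
The "reactive response to input" category is the simplest to picture; at its core it's just a lookup from an incoming message to a reply. Here's a toy sketch of that pattern in Python. The commands and canned replies are invented, and a real bot would sit behind a chat platform's API rather than print to a console.

```python
# Invented commands and replies -- a real bot would be wired to a chat API.
RESPONSES = {
    "status": "All monitored services are green.",
    "oncall": "This week's on-call engineer is listed in the team calendar.",
    "help": "Known commands: status, oncall, help",
}

def handle_message(text):
    """A reactive bot: map the incoming text to a canned reply."""
    return RESPONSES.get(text.strip().lower(),
                         "Sorry, I don't know that one. Try 'help'.")

print(handle_message("status"))
print(handle_message("weather"))
```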

 

Along comes Voice

The next step in this evolution was voice-controlled bots, as speech recognition technology improved and became commonplace. Siri, Cortana, Alexa, Google Home … all provide that 'speak to me' style of bot interaction in an affordable wrapper. It doesn't feel like justice to call these 'assistants' bots, though, even with an underlying 'accept command and respond' service. Today's voice-controlled assistants must meld the complexity of speech recognition with phrase analysis to deliver a quality answer to a question that could be phrased many different ways. Adoption of voice control varies, with some people totally hooked on speaking commands and requests into their devices, while others reserve it for moments of fun and beat-boxing.

 

If bots have been around for so long, why are they only becoming more mainstream now?

 

I think bots are an example of a technology that was ahead of its time. It took more widespread adoption and acceptance of social media before we reached the critical mass where enough people were on those platforms to make bots worth looking at. Social media also provided an easy input channel to businesses and brands, who now have a problem that bots can solve – the automation of inquiries without the cost of human head count 24 x 7. We've also created a problem with the large number of cost-effective cloud services. It's not uncommon for people to use different internet-based services for different tasks, leading to a need for connectivity. Systems like If This Then That and Microsoft Flow are helping to solve that problem. Bots also help connect our services to bring notifications or input points into a more centralized location, so we don't have to bounce around as much with information siloed in individual services. That's important in a time-poor, information-overwhelmed society.

 

The rise of the bots reminds me of the acceptance of instant messaging and presence awareness. Back in the old days, Lotus Sametime had the capability to show if someone was online from within an email and provide instant individual and group messaging. While it was an Enterprise tool (no Sametime as a Service, free or subscription based), we still had a hard time in the late 90s convincing people of the business benefit. Surely they’ll just all gossip with each other and not get any work done? You have less of a challenge these days convincing a business about Skype or Skype for Business when portions of the workforce (especially the younger employees) have Facebook, Slack and Twitter as part of their lives.  In a world where some prefer text-based chat to phone calls, instant messaging in the workplace is not that big of an adoption leap. I think bots now have a similar, easier path to adoption, though they still have a way to go.

 

Why is this important?

Over the next few weeks I'm going to delve more into this emerging technology, along with machine learning and artificial intelligence. I'm interested to hear if you think it's all hype, or if we need to embrace it as the next big, life-changing thing. As much of a geek as I am, I've seen concepts fail when they are a technology looking for a problem to solve, rather than the other way around. If bots, AI, and machine learning are the future, what will that look like for us as consumers and as IT professionals?


 

Automation is the future. Automation is coming. It will eliminate all of our jobs and make us obsolete. Or at least that is the story which is often being told.  Isn’t it true though?

I mean, who remembers those vending machines of the future that were set to eliminate the need for cooks, chefs, and cafeterias? That sounds remarkably like the same tools we're using on a regular basis today: built, designed, and streamlined to make us all unnecessary to the business! And then profit, right?

 

Well, if automation isn't intended to eliminate us, what IS it for?  Some might say that automation makes the things we're already doing today easier and makes us better at doing our jobs. That can be true, to a point. Some might also say that automation takes things we cannot do today and makes them possible, so we can be better at doing our jobs. That can also be true, to a point.

 

How many of you recall, over the course of your networking operations and management lives, long before Wireshark and Ethereal, having to bring in or hire a company to help troubleshoot a problem with a "sniffer laptop"? It wasn't anything special, and it's something we all likely take for granted today, yet it was a specialized piece of equipment with specialized software that enabled us to gain insight into our environment, to dig into the weeds and see what was going on!


 

These days, though, with network monitoring tools, SNMP, NetFlow, sFlow, syslog servers, and real-time telemetry from our network devices, this visibility is not only attainable, it's downright expected that we should have all of this information at hand.

 

With the exception of the specialized 'sniffer' engineer, I don't see that automation has eliminated people. It only made us all more valuable, and yet the expectation of what we're able to derive from the network has only grown. This kind of data has made us better informed, but it hasn't exactly made us smarter. The ability to read and interpret the Rosetta stone of networking information is what has separated the engineers from the administrators in some organizations. Oftentimes the use of tools has been the key to taking that data and not only making it readable, but also making it actionable.

 

Automation can rear its beautiful or ugly head in many different incarnations in the business, from making the deployment of workstations or servers easier than it was in the past with software suites, tools, or scripting, to (taking a dated analogy) eliminating the need for manual switchboard operators at telcos by replacing them with switching stations that transfer calls automatically based on the digits dialed. There is a contrast between taking something we were already doing and making it better, and taking something we were already doing with people and eliminating those people. And then there are entirely new capabilities: in this latest generation, thanks to technology and automation, credit card companies can generate 'one-time card numbers' that can be tied back to a very specific amount of money to withdraw from your credit card account. That capability was not only unheard of in the past, it would have been fundamentally impossible to implement, let alone police, without our current generation of data, analytics, and automation abilities.

 

 

As this era of explosive knowledge growth, big data analytics, and automation continues, what have you been seeing as the big differentiators? What kind of automation is making your job easier or more possible, which aspects of automation are creating capabilities that fundamentally didn't exist before, and which parts of it are partially or wholly eliminating the way we do things today?

To get started with securing your network, you don't need to begin with a multi-million dollar project involving multiple penetration tests, massive network audits, widespread operating system upgrades, and the installation of eye-wateringly expensive security appliances. Those are all great things to do if you have the staff and budget, but just starting with some entry-level basics will provide a huge step forward in securing your network from today's more common vulnerabilities. These ten practices are relatively easy and quick ways to create the foundation of a robust security program.

 

1. Patching

 

Keeping operating system patches up to date may seem like a no-brainer, but it still seems to fall by the wayside even in large organizations. I use the term “patching” very loosely here because I want to highlight the importance of updating all operating systems, not just Windows.

 

It’s important to set a regular Windows patch schedule and automate it using whatever tools you have available. Whether this is weekly or monthly, the key is that it’s regular and systematic.

 

Also remember all the other operating systems running on the network. This means keeping track of and updating the software running on network devices such as routers, switches, and firewalls, and also Linux-based operating systems commonly used for many servers and specialized end-user use cases. 
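
Even a small script can help make the patching routine regular and systematic by telling you which hosts have fallen behind. Here's a rough sketch, assuming Debian/Ubuntu-style hosts where the `apt` tool is available; Windows hosts would be handled through WSUS or similar tooling instead.

```python
import subprocess

def pending_updates():
    """Return the list of packages with pending updates on a Debian/Ubuntu host."""
    result = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True
    )
    # First line is a header ("Listing..."); the rest are upgradable packages.
    lines = result.stdout.strip().splitlines()[1:]
    return [line.split("/")[0] for line in lines if line]

if __name__ == "__main__":
    updates = pending_updates()
    if updates:
        print(f"{len(updates)} packages need patching: {', '.join(updates)}")
    else:
        print("Host is up to date.")
```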

 

 

2. Endpoint Virus Protection

 

Not long ago, endpoint virus protection was a bear to run because of how resource intensive it was on the local computer. This is not at all the case anymore, and with how frequently malware sneaks into networks via email and random web-browsing, endpoint protection is an absolutely necessary piece of any meaningful security program.

 

 

3. Policy of Least Privilege

 

Keep in mind that attack vectors aren’t all external to your network. It’s important to keep things secure internally as well. Assigning end-users only the privileges they need to perform their job function is a simple way to provide another layer of protection against malicious or even accidental deletion of files, copying or sending unauthorized data, or accessing prohibited resources.

 

 

4. Centralized Authentication

 

Using individual or shared passwords to access company resources is a recipe for a security breach. For example, rather than use a shared pre-shared-key for company wireless, use 802.1x authentication and a centralized database of users such as Windows Active Directory in order to lock down connectivity and restrict what resources users can access. This can be applied to almost any resource including computers, file shares, and even building access.

 

 

5. Monitoring and Logging

 

Monitoring a network and keeping extensive logs can be very expensive simply because of the cost associated with the hardware and licensing needed to audit and store large amounts of data. However, this may be one area in which it would be a good idea to explore some software options. Most network devices have very few built-in tools for monitoring and logging, so even a basic software solution is still a huge step forward. This is very important for creating network baselines in order to determine anomalous behavior as well as traffic trends needed to right-size network designs. Additionally, having even very basic logs is priceless when investigating a security breach or performing service-desk root cause analysis.
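
As a tiny illustration of how little it takes to start centralizing logs, the sketch below forwards script or application log messages to a syslog collector using only Python's standard library. The server address is a placeholder assumption; point it at whatever collector you stand up.

```python
import logging
import logging.handlers

# Forward log records to a central syslog server (address is illustrative).
syslog = logging.handlers.SysLogHandler(address=("syslog.example.local", 514))
syslog.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))

logger = logging.getLogger("edge-switch-poller")
logger.setLevel(logging.INFO)
logger.addHandler(syslog)

logger.info("Polling cycle completed: 48 interfaces up, 0 errors")
logger.warning("Interface Gi0/12 error counter increased since last poll")
```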

 

 

6. End-user training

 

The only way to completely secure a network is to turn off all the computers and send them to the bottom of the ocean. Until management approves such a policy, end-users will be clicking on links they shouldn't be clicking on and grabbing files from the file share just before their friendly exit interview. End-user training is a practice in changing culture and security awareness. This is a difficult task for sure, but it's an inexpensive and non-technical way to strengthen the security posture of an organization. End-user training should include instructions on what red flags to look for in suspicious email and how to report suspicious activity. It should also include training on avoiding password sharing and using email properly.

 

 

7. Perimeter Security

 

The perimeter of the network is where the local area network meets the public internet, but today that line is very blurred. A shift toward a remote workforce, the use of cloud services, and the movement away from private circuits means that the public internet is almost an extension of the LAN. Traditionally, perimeter firewalls were used to lock down the network edge and stop any malicious attack from the outside. Today, so much necessary business traffic ingresses and egresses the perimeter firewall that it's important to keep firewall rules up-to-date and maintain a very keen awareness of what services run on the network. For example, a very simple modification for egress filtering is to restrict outbound traffic to any destination on port 25 (Simple Mail Transfer Protocol) to only the email server. This simple firewall change prevents any infected computer from generating outbound mail traffic that could mark the organization as a spam originator.
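
A quick way to sanity-check an egress rule like that, once it's in place, is a simple connection test from an ordinary workstation. The sketch below is only an illustration; the hostnames are placeholders, and the expected result is that nothing but the approved mail relay answers on port 25.

```python
import socket

def can_reach(host, port=25, timeout=5):
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From a normal workstation, only the approved relay should be reachable on 25.
for host in ["mail.example.internal", "smtp.some-external-host.example"]:
    status = "OPEN" if can_reach(host) else "blocked"
    print(f"{host}:25 -> {status}")
```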

 

 

8. Enterprise IoT

 

The Internet of Things may certainly be a buzzword in some peoples' minds, but many companies have been dealing with a multitude of small, insecure, IP-enabled devices for years. Manufacturing companies often use hand-held barcode scanners, medical facilities use IP-based tracking devices for medical equipment, and large office campuses use IP-based card access readers for doors. These devices aren't always very secure, sometimes utilizing port 80 (unencrypted HTTP) for data transmission. This can be a big hole in an organization's network security posture. Some organizations have the money and staff to implement custom management systems for these devices, but an entry-level approach to get started could be to simply place all like devices in a VLAN that has very restricted access. Applying the policy of least privilege to a network's IoT devices is a great first step toward securing the current influx of IP-enabled everything.

 

 

9. Personal Devices

 

End-users’ personal mobile devices, including smartphones and tablets, often outnumber corporate devices on many enterprise networks. It’s important to have a strategy to give folks a pleasant experience using their devices while keeping in mind that these are normally unmanaged and unknown network entities. To start, simply require by policy that all personal smartphones must use the guest wireless. This may ruffle some executive feathers, but really there’s almost no reason for a tiny smartphone to access company resources while on the LAN. Of course there are exceptions, but starting with this type of policy is at least a good company conversation starter to move toward a decent end-user experience without compromising network security.

 

 

 

10. Physical security

 

It may go without saying that a company's servers, network devices, and other sensitive infrastructure equipment should be behind locked doors, but often this is not the case. Especially in smaller organizations where there may be a culture of trust, entire server rooms are unlocked and accessible to anyone walking by. Physical security can take the form of biometric scanners to enter secure data centers with cameras peering down from overhead, but a simple first step is to lock all the network closets and server room doors. If keys are unaccounted for, locks should be changed. Additionally, disabling network ports not assigned to a workstation, printer, or other verified network device is a good way to prevent guests from plugging their non-corporate devices into the corporate network.

 

You don’t need to mortgage the farm to start making great strides in your organization’s security posture. These relatively simple and entry-level tasks will prevent most of the attack vectors we see today. Start with the basics to lay the foundation for a strong network security posture.

Pop quiz: When was the first instance of an Internet of Things thing?

 

If you guessed 1982, you’re right! In 1982, a Coca-Cola machine at Carnegie Mellon University was modified to connect to the internet and report its inventory and whether newly loaded drinks were cold. Of course, this is up for debate as some might even claim the first internet-connected computer was an IoT thing.

 

These days, IoT things are decidedly more numerous, and overall, IoT is much more complex. In fact, according to Gartner, in just the last two years, the number of IoT devices has surged to 6.4 billion, and by 2020, device population will reach 20.8 billion. With these eye-popping figures, we’re very curious to know how your job has been affected by IoT.

 

Does your organization have an IoT strategy?

 

How many IoT devices do you manage?

 

How have these devices affected your network and the ability to monitor it and/or capacity plan?

 

What are the security challenges you face as a result of these devices?

 

Tell us about your “IoT reality” by answering these questions (and maybe provide an anecdote or two) by November 9 and we’ll give you 250 THWACK points—and maybe a Coke circa 1982 (just kidding)—in return. Please be sure to also let us know a little about your company: industry, size, location (country).

I'm in Seattle this week at the PASS Summit. If you don't know PASS, check out sqlpass.org for more information. This will be my 13th consecutive Summit and I'm as excited as if it was my first. I am looking forward to seeing my #sqlfamily again.

 

Anyway, here's a bunch of stuff I found on the Intertubz in the past week that you might find interesting, enjoy!

 

DDoS attack against DNS provider knocks major sites offline

Yet another reminder that the internet is fragile, and you can't rely on any service provider to be up 100% of the time.

 

10 Modern Software Over-Engineering Mistakes

A bit long but worth the read. I am certain you will identify (painfully at times) with at least 8 or more of the items on this list.

 

What will AI make possible that's impossible today?

Also a bit long, but for someone that enjoys history lessons I found this worth sharing.

 

Go serverless for the enterprise with Microsoft Azure Functions

One of the sessions from Microsoft Ignite I felt worth sharing here. I am intrigued by the 'serverless' future that appears to be on the horizon. I am also intrigued as to how this future will further enable DDoS attacks because basic security seems to be hard for most.

 

Bill Belichick is done using the NFL’s Microsoft Surface tablet he hates so much

On the bright side, the announcers keep calling it an iPad, so there's that.

 

Here's how I handle online abuse

Wonderful post from Troy Hunt on how he handles online abuse. For those of us that put ourselves out there we often get negative feedback from the cesspool of misery known as the internet and Troy shares his techniques.

 

Michael Dell Outlines Framework for IT Dominance

"Work is not a location. You don't go to work, you do work." As good as that line is, they forgot the part about work also being a never ending day.

 

Microsoft allows Brazil to inspect its source code for ‘back doors’

I've never heard of this program before, but my first thought is how would anyone know if Microsoft was showing them everything?

 

I managed to visit some landmarks while in Barcelona last week, here's Legodaughter in front of Sagrada Familia:

lego - 1.jpg

Email is at the center of life for almost every business in this world. When email is down, businesses cannot communicate. There is a loss of productivity, which could lead to dollars lost, which in the end is not good.

Daily monitoring of Exchange involves many aspects of the environment and the surrounding infrastructure. Simply turning on monitoring will not get you very far. The first question you should ask yourself is, "What do I need to monitor?" Not knowing what to look for could inundate you with alerts, which is not going to be helpful for you.

One of the first places to look when troubleshooting mail slowness or other email issues is the network. Therefore, it is a good idea to monitor some basic network counters on the Exchange servers. These counters will help guide you toward the root cause of the issue.

 

Network Counters

The following table displays acceptable thresholds and information about common network counters.

 

Counter: Network Interface(*)\Packets Outbound Errors
Description: Indicates the number of outbound packets that couldn't be transmitted because of errors.
Threshold: Should be 0 at all times.

Counter: TCPv6\Connection Failures
Description: Shows the number of TCP connections for which the current state is either ESTABLISHED or CLOSE-WAIT. The number of TCP connections that can be established is constrained by the size of the nonpaged pool. When the nonpaged pool is depleted, no new connections can be established.
Threshold: Not applicable

Counter: TCPv4\Connections Reset
Description: Shows the number of times TCP connections have made a direct transition to the CLOSED state from either the ESTABLISHED state or the CLOSE-WAIT state.
Threshold: An increasing number of resets or a consistently increasing rate of resets can indicate a bandwidth shortage.

Counter: TCPv6\Connections Reset
Description: Shows the number of times TCP connections have made a direct transition to the CLOSED state from either the ESTABLISHED state or the CLOSE-WAIT state.
Threshold: An increasing number of resets or a consistently increasing rate of resets can indicate a bandwidth shortage.
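
As a rough illustration of how one of these counters can be spot-checked from a script, the sketch below samples the Packets Outbound Errors counter through the built-in Windows `typeperf` utility and flags anything above the zero threshold. A real monitoring product would collect these continuously; this is only a sketch, and the parsing assumes typeperf's default CSV output.

```python
import subprocess

COUNTER = r"\Network Interface(*)\Packets Outbound Errors"

def sample_counter(counter, samples=3):
    """Collect a few samples of a Windows performance counter via typeperf."""
    result = subprocess.run(
        ["typeperf", counter, "-sc", str(samples)],
        capture_output=True, text=True, check=True
    )
    values = []
    for line in result.stdout.splitlines():
        parts = line.strip().strip('"').split('","')
        # Data rows start with a timestamp followed by one value per instance.
        if len(parts) > 1 and "/" in parts[0]:
            values.extend(float(v) for v in parts[1:] if v)
    return values

if __name__ == "__main__":
    errors = sample_counter(COUNTER)
    if any(v > 0 for v in errors):
        print("WARNING: outbound packet errors detected - threshold is 0 at all times")
    else:
        print("Packets Outbound Errors is 0 across all sampled interfaces")
```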

 

 


Monitoring Beyond the Exchange Environment


When you're monitoring Exchange, not only are you monitoring for performance, but you also want to monitor outside factors such as the network, Active Directory, and any external connections such as mobile device management. All of these external factors affect the health of your Exchange environment.

In order to run Exchange, you need a network; yes, routers and switches can impact Exchange. As the Exchange admin, you don't need to be aware of every single network event, but a simple alert about a network outage or blip can be helpful. Sometimes all it takes is a slight blip in the network to affect your Exchange DAG by causing databases to fail over.

If you are not responsible for the network, then I would suggest you coordinate with your network team on what notifications you should receive in terms of network outages. Some key items to be informed or notified of are:

  • Planned outages between datacenters
  • Planned outages for network equipment
  • Network equipment upgrades and/or changes that would affect the subnet your Exchange servers reside on
  • Unplanned outages of network equipment and between datacenters
  • If your servers are virtualized, you should be informed of any host changes and/or virtual switch changes
  • Planned or unplanned DNS server changes because DNS issues can be a real nightmare

Preventing Bigger Headaches

Exchange monitoring is a headache and can be time-consuming, but if you know what you are looking for and have the right tools in hand, it is not so bad. If the Exchange DAG is properly designed, a network blip or outage should not take down email for your company; this is the whole point of having an Exchange DAG (a high-availability design). What you may get is a few help desk calls when users see that their Outlook has disconnected briefly. Being informed of potential network outages can help you prepare in advance if you need to manually switch active copies of databases or when you need to do mailbox migrations. A network that is congested or having outages can cause mailbox migrations to fail, cause Outlook clients to disconnect, and even impact the speed of email delivery. Knowing ahead of time allows you to be prepared and have fewer headaches.

 


Forget about writing a letter to your congressman – today citizens use the web, email and social media to make their voices heard on the state, local and federal levels.

 

Much of this participation is due to the ubiquity of mobile devices. People can do just about everything with a smartphone or tablet and they expect their interactions with the government to be just as easy.

 

Unfortunately, according to a January 2015 report by the American Customer Satisfaction Index, citizen satisfaction with federal government services continued to decline in 2014. This, despite Cross-Agency Priority Goals that require federal agencies to “utilize technology to improve the customer experience.”

 

IT pros need to design services that allow users to easily access information and interact with their governments using any type of device. Then, they must monitor these services to ensure they continue to deliver optimal experiences.

 

Those who wish to avoid the ire of the citizenry would do well to add automated end-user monitoring to their IT toolkit. End-user monitoring allows agency IT managers to continuously observe the user experience without having to manually check whether a website or portal is functioning properly. It can help ensure that applications and sites remain problem-free, and enhance a government's relationship with its citizens.

 

There are three types of end-user monitoring solutions IT professionals can use to ensure their services are running at peak performance.

 

First, there is web performance monitoring, which can proactively identify slow or non-performing websites that could hamper the user experience. Automated web performance monitoring tools can also report load-times of page elements so that administrators can adjust and fix pages accordingly.
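
The idea behind automated web performance monitoring can be boiled down to a timed request and a threshold, as in the sketch below. It assumes the third-party `requests` library, and the URL and response-time budget are placeholder values; commercial tools go much further, breaking out the load time of each page element.

```python
import time
import requests

URL = "https://portal.example.gov/status"   # placeholder endpoint
THRESHOLD_SECONDS = 2.0                     # illustrative response-time budget

def check_page(url):
    """Time a single page request and return (status code, elapsed seconds)."""
    start = time.monotonic()
    response = requests.get(url, timeout=10)
    elapsed = time.monotonic() - start
    return response.status_code, elapsed

if __name__ == "__main__":
    status, elapsed = check_page(URL)
    if status != 200 or elapsed > THRESHOLD_SECONDS:
        print(f"ALERT: {URL} returned {status} in {elapsed:.2f}s")
    else:
        print(f"OK: {URL} responded in {elapsed:.2f}s")
```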

 

Synthetic end-user monitoring (SEUM) allows IT administrators to run simulated tests on different scenarios to anticipate the outcome of certain events. For example, in the days leading up to an election or critical vote on the Hill, agency IT professionals may wish to test certain applications to ensure they can handle spikes in traffic. Depending on the results, managers can make adjustments to handle the influx.

 

Likewise, SEUM allows for testing of beta applications or sites, so managers can gauge the positive or negative aspects of the user experience before the services go live.

 

Finally, real-time end-user monitoring effectively complements its synthetic partner. It is a passive monitoring process that gathers actual performance data as end users are visiting and interacting with the web application in real time, and it will alert administrators to any sort of anomaly.

 

Using these monitoring techniques, IT teams can address user experience issues from certain locations – helping to ascertain why a response rate from a user in Washington, D.C., might be dramatically different from one in Austin, Texas.

 

Today, governments are trying to become more agile and responsive and are committed to innovation. They’re also looking for ways to better service their customers. The potent combination of synthetic, real-time and web performance monitoring can help them achieve all of these goals by greatly enhancing end-user satisfaction and overall citizen engagement.

 

Find the full article on Government Computer News.

Without a doubt, the biggest trend in IT storage over the past year, and moving forward, is the concept of Software Defined Storage (SDS). It's more than just a buzzword in the industry, but, as I've posted before, a true definition has yet to be agreed upon. I've written previously about this same thought. Here's what I wrote.

 

Ultimately, I talked about different brands with different approaches and definitions. So, at this point, I'm not going to rehash the details. But at a high level, the value as I see it has to do with divorcing the hardware layer from the management plane. In my view, leveraging the horsepower of commodity hardware in reference builds, plus a management layer optimized for that hardware build, brings costs down for the user/IT organization and, potentially, gives them the ability to customize the hardware choices for the use case. Typically, your choices revolve around read/write IOPS, disk redundancy, tiering, compression, deduplication, number of paths to disk, failover, and of course, with the use of x86 architecture, the amount of RAM and the speed of the processors in the servers. Comparing these to traditional monolithic storage platforms makes for a compelling case.

 

However, there are other approaches. I've had conversations with customers who only want to buy a "premixed/premeasured" solution. And while that doesn't lock out SDS models such as the one above, it does change the game a bit. Toward that end, many storage companies will align with a specific server and disk model. They'll build architectures very tightly bound to a hardware stack, even though they're relying on commodity hardware, and allow customers to purchase the storage in much the same way as more traditional models. They often take it a step further and put a custom bezel on the front of the device. So it may be Dell behind the bezel, but it's "Someone's Storage Company" on the front. After all, the magic all takes place at the software layer, whatever makes the solution unique… so why not?

 

Another category that I feel is truly trending in storage is really a recent category in backup, dubbed CDM, or Copy Data Management. Typically, these are smaller bricks of storage that act as gateway-type devices, holding onto some backup data but also pointing to the real target, as defined by the lifecycle policy on the data. There are a number of players here; I am thinking specifically of Rubrik, Cohesity, Actifio, and others. As these devices are built on storage bricks but made functional simply due to superior software, I would also consider them valid contenders as Software Defined Storage.

 

Backup and disaster recovery are key pain points in the management of an infrastructure. Traditional methods consisted of some level of scripted software moving your backup data onto a tape mechanism (maybe robotic), which quite often required manual filing and then the rotation of tapes to places like Iron Mountain. Restores have been tricky: time spent awaiting the restore, and the potential corruption of files on those tapes, have been reasonably consistent problems. Tools like these, as well as other styles of backup, including cloud service providers and even public cloud environments, have made things far more viable. These CDM solutions take much of the legwork out of the process and can enable quite possibly zero recovery point and recovery time objectives, regardless of where the current file is located, and by that I mean the CDM device, local storage, or even some cloud vendor's storage. It shouldn't matter, as long as the metadata points to that file.

 

I have been very interested in changes in this space for quite some time, as these are key process changes pushing upward into the data center. I’d be curious to hear your responses and ideas as well. 

PCI DSS 3.1 expires October 31st this year. But don't panic. If you don't have a migration plan for 3.2 yet, you have until February 1, 2018, before the new requirements become mandatory. For most merchants, the changes are not onerous. If you are a service provider, however, there are more substantial changes, some of which are already in effect. In this post we focus on the requirements for merchants, and present a quick overview of the required changes so that you can identify any gaps you may still need to remediate.

 

The main changes in PCI DSS 3.2 are around authentication, encryption, and other minor clarifications. We will primarily discuss authentication and encryption in this article.

 

Authentication

The PCI council's review of data breaches over the years has confirmed that current authentication methods are not sufficient to mitigate the risk of cardholder data breaches. Even though cardholder data is maintained on specific network segments, it is nearly impossible to prevent a data breach if the authentication mechanisms are vulnerable as well. Specifically, relying only on passwords, challenge questions, and the like has proven to be a weakness in the overall security of cardholder networks.

 

The new 3.2 requirements now state that multi-factor authentication (MFA) is required for all individual non-console administrative access, and for all remote access, to the cardholder data environment.[1] The new standard is clear. Multi-factor[2] means at least two of the following:

 

  • Something you know, such as a password or passphrase
  • Something you have, such as a token device or smart card
  • Something you are, such as a biometric

 

Multi-factor does not mean a password and a set of challenge questions, or two passwords.
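
To make the "something you have" factor concrete, the sketch below computes a time-based one-time password (TOTP, the mechanism behind most authenticator apps) using only Python's standard library. The shared secret is a placeholder, and a production rollout would rely on a vetted MFA product rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Example shared secret (base32) -- illustrative only; never hard-code real secrets.
print(totp("JBSWY3DPEHPK3PXP"))
```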

 

Implementing such multifactor authentication takes time and planning as there are almost 200 different types of multifactor solutions on the market.

Multifactor authentication is also complex because:

  1. It likely needs to integrate with your single sign-on solution
  2. Not all IT systems can support the same types of MFA, especially cloud solutions
  3. MFA resets are more complex (especially if a dongle is lost)
  4. Most MFA solutions require rollout and management consoles on top of your built-in authentication

 

Encryption

In recent years, SSL, the workhorse for securing e-commerce and credit card transactions, has been through the wringer. From Shellshock, Heartbleed, and POODLE to the man-in-the-middle AES CBC padding attack reported this May, SSL, OpenSSL, and all the derivative implementations that have branched from OpenSSL have been experiencing one high-severity vulnerability after another. [SIDEBAR: A list of all vulnerabilities in OpenSSL can be found here: List of OpenSSL Vulnerabilities; other SSL vulnerabilities can be found on the vendors' websites or at the National Vulnerability Database.] Of particular concern are SSL vulnerabilities in Point of Sale solutions and other embedded devices, as those systems may be harder to upgrade and patch than general-purpose computers. In response, the PCI council has issued guidance on the use of SSL.[3]

 

The simplest approach to achieving PCI DSS 3.2 compliance is not to use SSL, or early versions of TLS, period. [SIDEBAR: TLS - Transport Layer Security is the name of the IETF[4] RFC that was the follow-on to SSL.] In fact, any version of TLS prior to 1.2 and all versions of SSL 1.0-3.0 are considered insufficiently secure to protect cardholder data. As a merchant, you may still use these older versions if you have a plan in place to migrate by June 2018, and a risk mitigation plan to compensate for the risks of using such older versions. Risk mitigation plans might include, for example, additional breach detection controls, or documented patches for all known vulnerabilities, or carefully selected cipher suites to ensure no weak ciphers are permitted, or all of the above. If you have a Point of Sale or Point of Interaction system that does not have any known vulnerabilities, you may run these systems using older versions of SSL/TLS. The PCI council reserves the right to change this guidance if new vulnerabilities associated with a particular POS or POI become known that jeopardize the cardholder environment.

 

If you are concerned about the risk to your e-commerce or mobile environment of upgrading your SSL to TLS 1.2 or higher, you can ask your online marketing department what the oldest versions of iOS, Android, Windows® and Mac® are that connect to your systems. Android has supported TLS 1.2 since version 4.1, although it is not enabled by default; as of version 4.4, KitKat, released Oct. 2013, TLS 1.2 has been enabled by default. iOS has supported TLS 1.2 since iOS 5 in Oct. 2011. Windows 8 is the first Microsoft OS to support TLS 1.2 by default; prior to that, you needed to manually configure TLS 1.2 support. A complete TLS table for Windows is available here: TLS version support. For Mac OS®, you need to reach Mavericks (Oct. 2013) to find a Mac computer with TLS 1.2 enabled by default.

 

If all this versioning seems daunting, not to worry. Most modern browsers, which auto-upgrade, have supported TLS 1.2 for a long time.[5] The net/net is that most organizations have nothing to fear from upgrading their secure communications layer to TLS 1.2.
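
If you'd rather verify than assume, a quick check of what a given endpoint actually negotiates is possible with Python's standard `ssl` module, as sketched below. The hostname is a placeholder; run it against your own e-commerce or API endpoints.

```python
import socket
import ssl

def negotiated_tls_version(host, port=443):
    """Report the TLS version negotiated with a server using default client settings."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

# Placeholder hostname -- point this at your own endpoint.
print(negotiated_tls_version("shop.example.com"))   # e.g. "TLSv1.2" or "TLSv1.3"
```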

 

Minor Clarifications

There are some additional nuances regarding the correct cipher suites to use to meet PCI DSS 3.2 requirements, which we will cover in a future post on using managed file transfer in cardholder environments.

 

If you are a PCI pro, you now have a good overview of the major changes coming in PCI DSS 3.2. Note that we didn't address the additional changes a service provider needs to comply with, nor did we walk through all 12 requirements of PCI DSS and how they apply to IT organizations. For some of that, you should check out the Simplifying PCI Compliance for IT Professionals White Paper.

 

Questions about PCI DSS?

If you have questions regarding the latest version of PCI DSS, or what’s generally required from an IT perspective to maintain compliance, we urge you to join in the conversation on THWACK!

 


[1] Requirement 8.3

[2] Requirement 8.2

[3] PCI DSS 3.2 Appendix 2

[4] Internet Engineering Task Force RFC 2246, et. seq https://www.ietf.org/rfc/rfc2246.txt

[5] https://en.wikipedia.org/wiki/Template:TLS/SSL_support_history_of_web_browsers

Master of Your Virtual IT Universe: Trust but Verify at Any Scale

A Never Ending IT Journey around Optimizing, Automating and Reporting on Your Virtual Data Center


Automation

Automation is a skill that requires detailed knowledge, including comprehensive experience around a specific task. This is because you need that task to be fully encapsulated in a workflow script, template, or blueprint. Automation, much like optimization, focuses on understanding the interactions of the IT ecosystem, the behavior of the application stack, and the interdependencies of systems to deliver the benefits of economies of scale and efficiency to the overall business objectives. And it embraces the do-more-with-less edict that IT professionals have to abide by.

 

Automation is the culmination of a series of brain dumps covering the steps that an IT professional takes to complete a single task. These are steps that the IT pro is expected to complete multiple times with regularity and consistency. The singularity of regularity is a common thread in deciding to automate an IT process.

 

Excerpted from Skillz To Master Your Virtual Universe SOAR Framework

 

Automation in the virtual data center spans workflows. These workflows can encompass management actions such as provisioning or reclaiming virtual resources, setting up profiles and configurations in a one to many manner, and reflecting best practices in policies across the virtual data center in a consistent and scalable way.

 

Embodiment of automation

Scripts, templates, and blueprints embody IT automation. They are created from an IT professional’s best practice methodology - tried and true IT methods and processes. Unfortunately, automation itself cannot differentiate between good and bad. Therefore, automating bad IT practice will lead to unbelievable pain at scale across your virtual data centers.

 

To keep that from happening, keep automation stupid simple. First, automate at a controlled scale, following the mantra, "Do no harm to your production data center environment." Next, monitor the automation process from start to finish in order to ensure that every step executes as expected. Finally, analyze the results and use your findings to make necessary adjustments to optimize the automation process.
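
As a bare-bones illustration of that "monitor every step" idea, the sketch below runs a workflow of placeholder steps, logs each one, and halts on the first failure so nothing executes against the production environment unchecked. The step functions are stand-ins; in practice each would call your real provisioning tooling or APIs.

```python
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

# Placeholder steps -- each would call real provisioning tooling in practice.
def provision_vm():
    logging.info("step: provision virtual machine")

def apply_profile():
    logging.info("step: apply configuration profile")

def register_monitoring():
    logging.info("step: register VM with monitoring")

def run_workflow(steps):
    """Run each automation step in order, log it, and stop on the first failure."""
    for step in steps:
        try:
            step()
        except Exception:
            logging.exception("step %s failed - halting for review", step.__name__)
            return False
    logging.info("workflow completed: %d steps executed as expected", len(steps))
    return True

run_workflow([provision_vm, apply_profile, register_monitoring])
```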

 

Automate with purpose

Start with an end goal in mind. What problems are you solving with your automation work? If you can't answer this question, then you're not ready to automate any solution.

 

This post is a shortened version of the eventual eBook chapter. Stay tuned for the expanded version in the eBook. Next week, I will cover reporting in the virtual data center.

I'm in Barcelona this week. It's been ten years since I've been here so it's hard for me to tell what has changed, but everything seems new. It still has a wonderful feel, and wonderful food. Yesterday I presented my session with David Klee to a packed room. I loved the energy of the session, and VMworld as a whole. I hope I get a chance to come back again.

 

Anyway, here's a bunch of stuff I found on the Intertubz in the past week that you might find interesting, enjoy!

 

You've been hacked. What are you liable for?

A nice reminder for those that aren't concerned about being hacked. You may have some liability you were not expecting.

 

SQL Server 2016 Express Edition in Windows containers

This may seem like a minor blip on the radar for some, but for those of us in the Microsoft Fanbois circles this is a huge step forward to what we believe will be total assimilation of all things data.

 

Amazon Partners With VMware to Extend Its Computing Cloud

Of course, SQL Server Express in a container pales in comparison to the big news of the week, and that is VMware partnering with Amazon to go head on with Microsoft for the enterprise hybrid cloud market.

 

How an IT Pro Kicked Hackers Off Surveillance Cameras

Score one for the good guys, I guess. Maybe we can get Rich a job working for the companies that continue to deploy unsecured IoT devices.

 

S.A.R.A. Seeks To Give Artificial Intelligence People Skills

Something tells me that maybe letting college students decide what is appropriate behavior for AI isn't the best idea.

 

Work Counts

Ever wonder how many people are just like you? Well, wonder no more!

 

Because I'll be away on Election day, I already filled out my absentee ballot and wanted to show you who I voted for:

 

bacon - 1.jpg

Whether seeking solutions to pressing problems, networking with like-minded professionals or researching products, professionals across nearly all industries benefit from participating in communities every day, and it’s no different for federal IT professionals.

 

Actively engaging in online IT communities can make all the difference by enabling connections with other federal IT pros who are encountering similar issues, providing educational content, offering insights from experts and providing a channel to share valuable feedback.

 

Peer-to-Peer Collaboration

 

Online IT communities offer diverse feedback on creative ideas, and productive discussions for problem-solving. You may find your “unique” problem is actually more common than originally thought.

 

For example, GovLoop features educational blogs, forums about everything from career to citizen engagement, training both online and in person, and group spaces to engage with content around topics such as technology, levels of government and occupations.

 

Another popular community is Reddit, which often generates reams of answers from users who have experienced similar problems or are able to recommend resources to fix the issue. Reddit also features up-voting and down-voting on replies, making it easy to identify trusted answers from top-ranked users.

 

This sort of organic, peer-to-peer collaboration can help you solve problems quickly and with the confidence of having your answers come from trusted sources—professionals just like you from a wide variety of backgrounds and with vast ranges of expertise.

 

Direct Line to Vendors

 

Online communities can also connect you with the vendors whose products you rely on to get your job done. As you become more involved, you might transition from solely an information-seeker to an information-provider.

 

For example, the highly engaged end users that participate in SolarWinds’ more than 130,000 member strong IT community, THWACK, have influenced product directions and even go-to-market strategy based on their direct feedback and general discussion of industry trends and the company’s products. THWACK also features a Federal and Government space, which caters specifically to challenges that are unique to federal IT pros.

 

Career Development

 

Online IT communities also enable education that lends itself to career development. For example, many consider the Spiceworks How-to forum a reliable place to develop IT skills and learn about the industry. Forums such as this, and those found on THWACK, provide a venue for community members to learn best practices, get advice from experts in various fields, and research developing trends that could impact your career, all in an environment where you can ask questions and engage in conversations.

 

The Power of the Masses at Your Fingertips

 

In summary, the access to a wide audience of peers that online IT communities provide can be invaluable to you as a federal IT pro. Membership and active participation within such communities can provide quick answers to problems, foster collaboration to ensure vendors are creating products that meet your needs and create new opportunities for your career development.

 

Find the full article on our partner DLT’s blog, TechnicallySpeaking.


The process of expanding operations or adopting new technology within an IT organization is sometimes met with caution. Rightfully so, given what’s at stake. Even the smallest configuration error, which can happen when you introduce new software or systems into a network, can spell disaster. Whether it leads to downtime, loss of data, the advent of security vulnerabilities, or compliance violations, the costs can be great for businesses of all sizes.

 

It’s not surprising, then, that some IT pros are hesitant to try out new software or test the latest SolarWinds offerings. But what it really boils down to is the fact that some IT pros don’t have the resources available to test a solution effectively without fearing these negative consequences.

 

What if I told you it was possible to test a solution like SolarWinds® Log & Event Manager (LEM) in a manner that was both safe and free for your business? Would you consider adding a powerful SIEM solution to your arsenal that tackles IT security and compliance? Well, you can! Introducing the LEM + GNS3® Integration Guide.

 

What is GNS3?

 

GNS3 is a multi-vendor tool that allows you to build, design, and test network configurations and software in a risk-free virtual environment. This technology eliminates the need for expensive physical testing by offering a network-attached or stand-alone virtual test bed, free of charge. With real-time network emulation, users can conduct proof-of-concept testing and troubleshooting on dynamic network configurations.

 

Download the GNS3 and SolarWinds LEM Integration Guide

 

Whether you’re a seasoned GNS3 pro who’s new to LEM, or a LEM user who’s interested in building a lab to experience the full functionality of the product within a safe and secure virtualized instance for testing or troubleshooting, this guide has something for you. In addition to showing you how to get started with VMware®, Hyper-V®, GNS3, and LEM, this guide covers some LEM basics to help ensure that you hit the ground running with this advanced security solution.

 

Click here for access to the guide and free 30-day trial of LEM!

 

To learn more about the partnership we’ve formed with GNS3, check out the GNS3 Group on THWACK: https://thwack.solarwinds.com/groups/gns3

Master of Your Virtual IT Universe: Trust but Verify at Any Scale

A Never Ending IT Journey around Optimizing, Automating and Reporting on Your Virtual Data Center

Optimization

Optimization is a skill that requires a clear end goal in mind. It focuses on understanding the interactions of the IT ecosystem, the behavior of the application stack, and the interdependencies of systems inside and outside your sphere of influence in order to deliver on business objectives.

 

If one were to look at optimization from a theoretical perspective, each instantiation of optimization would be a multi-variable mathematical equation. Think multivariate calculus, with the IT pro trying to find the maxima as other variables change with respect to one another.

 

Excerpted from Skillz To Master Your Virtual Universe SOAR Framework

 

Optimization in the virtual data center spans the health of resource utilization and saturation, while also encompassing resource capacity planning and elasticity. Utilization, saturation, and errors play key roles in the optimization skill. The key question is: what needs to be optimized in the virtual data center?

 

Resource scalability

Similar to other IT disciplines, optimization in the virtual environment boils down to optimizing resources, i.e., doing more with less. This oftentimes produces an over-commitment of resources and the eventual contention issues that follow the saturated state. If the contention persists over an extended period of time or comes too fast and too furious, errors usually crop up. And that’s when the “no-fun” time begins.

 

Resource optimization starts with tuning compute (vCPUs), memory (vRAM), network and storage. It extends to the application and its tunable properties through the hypervisor to the host and cluster.

 

Sub-optimal scale

 

vCPU and vRAM penalties manifest as saturation and errors, which lead to slow application performance and tickets being opened. There are definite costs to oversizing and undersizing virtual machines (VMs). Optimization seeks to find the fine line with respect to the entire virtual data center environment.

 

To optimize compute cycles, look at vCPU utilization counters as well as processor queue length. For instance, in VMware, the CPU counters to examine are %USED, %RDY, and %CSTP. %USED shows how much time the VM spent executing CPU cycles on the physical CPU. %RDY defines the percentage of time a VM wanted to execute but had to wait to be scheduled by the VMkernel. %CSTP is the percentage of time that an SMP VM was ready to run but incurred delay because of co-vCPU scheduling contention. The equivalent Windows performance counters are System\Processor Queue Length, Process\% Processor Time, Processor\% Processor Time, and Thread\% Processor Time.
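
As a rough illustration of how you might act on those counters, the sketch below flags VMs whose %RDY or %CSTP values look unhealthy. The sample values are made up, and the thresholds are common rules of thumb rather than official limits; adjust them for your own environment:

```python
# A minimal sketch: flag VMs whose CPU scheduling counters suggest contention.
# The sample values and thresholds are illustrative assumptions, not product
# defaults; feed in your own esxtop/perfmon exports.

SAMPLES = [
    {"vm": "web01", "pct_used": 65.0, "pct_rdy": 3.2, "pct_cstp": 0.1},
    {"vm": "db01",  "pct_used": 88.0, "pct_rdy": 14.5, "pct_cstp": 4.0},
]

RDY_THRESHOLD = 10.0   # % ready time that commonly signals scheduling contention
CSTP_THRESHOLD = 3.0   # % co-stop that commonly signals too many vCPUs

def flag_contention(samples):
    findings = []
    for s in samples:
        if s["pct_rdy"] > RDY_THRESHOLD:
            findings.append(f'{s["vm"]}: high %RDY ({s["pct_rdy"]}%) - VM is waiting to be scheduled')
        if s["pct_cstp"] > CSTP_THRESHOLD:
            findings.append(f'{s["vm"]}: high %CSTP ({s["pct_cstp"]}%) - consider fewer vCPUs')
    return findings

for finding in flag_contention(SAMPLES):
    print(finding)
```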

 

To optimize memory, look for memory swapping, guest-level paging, and overall memory utilization. For VMware, the counters are SWR/s and SWW/s, while for Windows the counter is Memory\Pages/sec. For Linux VMs, leverage vmstat and the swap counters si and so (swap in and swap out, respectively).
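
For the Linux case, a small script can watch vmstat for nonzero si/so values. This is a hedged sketch that assumes the standard procps vmstat output format, and it should be run inside the guest:

```python
import subprocess

def swap_activity(samples=5, interval=1):
    """Run vmstat and return (si, so) pairs: pages swapped in/out per second."""
    out = subprocess.run(["vmstat", str(interval), str(samples)],
                         capture_output=True, text=True, check=True).stdout.splitlines()
    # Locate the column header row (the one containing the field names).
    header = next(line.split() for line in out if " si " in f" {line} ")
    si_idx, so_idx = header.index("si"), header.index("so")
    pairs = []
    for line in out:
        fields = line.split()
        if len(fields) == len(header) and fields[0].isdigit():
            pairs.append((int(fields[si_idx]), int(fields[so_idx])))
    return pairs

if __name__ == "__main__":
    for si, so in swap_activity():
        if si or so:
            print(f"swap activity detected: si={si} so={so} pages/s")
```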

 

Of course, a virtualization maestro needs to factor in hypervisor kernel optimization/reclamation techniques as well as the application stack and the layout of their virtual data center infrastructure into their optimization process. 

 

This post is a shortened version of the eventual eBook chapter. For a longer treatment, stay tuned for the eBook. Next week, I will cover automation in the virtual data center.

If you didn’t have a chance to join some 350+ of your fellow IT and Security Pros at our Shields Up Panel: Network Security Fundamentals, Fight! THWACKcamp session – you’re in luck, we took some notes.

 

Our panel was composed of Eric Hodeen, Byron Anderson, our moderator Patrick Hubbard, and me, c1ph3r_qu33n.

 

Compliance vs. Security was the theme this year, and we tackled four big questions:

 

  • Have security practitioners and business owners figured out how to work with compliance schemes instead of fighting them? 
  • Are you more or less secure when you put compliance first?
  • What benefits (or harms) do compliance schemes and checklists offer?
  • If you are new to compliance, where do you start first? 

 

Our panelists felt that security and compliance teams are generally getting along better. However, there are still times when a business owner looks only at the penalties or risks of non-compliance and doesn’t consider the impact to the business of following a standard blindly. This can be especially true of highly prescriptive standards like the DISA STIGs (Defense Information Systems Agency - Security Technical Implementation Guides)[1] or NERC CIP (North American Electric Reliability Corporation - Critical Infrastructure Protection)[2]. The challenge for IT and security pros is to effectively communicate the potential business impacts and to give the business owner the ammunition to argue for a waiver or request a compensating control. This way your organization can reach an optimum balance of compliance risk vs. business needs.

 

One of the misconceptions that business owners may have is that a compliance scheme covers all of the organization’s security risk, so nothing further needs to be considered. As practitioners, we know that compliance schemes are negotiated or promulgated standards that take time to change. Adjusting for changes to the threat landscape and addressing new technology innovations in a rapid fashion are challenges for compliance schemes. Furthermore, no compliance standard considers every nuance of every IT environment.

 

So that is one of the risks of taking a compliance-only approach. But no one on the panel felt compliance schemes lack value. Like other good guidelines and checklists, such as the OWASP Top Ten[3] or the SANS Critical Security Controls[4], compliance checklists can add value to an organization, especially as assurance. The panel was divided, however, on whether you start with a checklist or end with one. The answer may depend on your organization’s maturity. If you’ve been doing security for a while, using a checklist to validate your approach may add an extra layer of assurance. If you are new to security, however, a good checklist can be a great asset as you get started in this new IT discipline.

 

Speaking of getting started, we all had different ideas about the most important first step. One of us said default passwords, which insidiously have a way of creeping back into the organization; whether from a new install or a reset of an existing device, default passwords still haunt us. Another panelist thought end users were the biggest challenge, and that maintaining good security requires strong user participation. Anyone who has dealt with ransomware or phishing knows how important it is to keep users informed of likely risks and good security hygiene.

 

VIDEO: Shields Up Panel: Network Security Fundamentals, Fight!

 

We all agreed that THWACKcamp was great fun and we hope to see you all next year. If you’ve got an issue you’d like to see the experts take a stab at, post your questions and we’ll put them in the idea basket for next year.

 


 


[1] http://iase.disa.mil/stigs/Pages/index.aspx

[2] http://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx

[3] https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project

[4] https://www.sans.org/critical-security-controls

I'm heading to VMworld in Barcelona next week, so if you are there let me know as I would love to talk data and databases with you. I'm co-presenting with David Klee and we are going to be talking about virtualizing your database server. I have not been to Barcelona in 10 years, I'm looking forward to seeing the city again, even briefly.

 

Here's a bunch of stuff I found on the Intertubz in the past week that you might find interesting, enjoy!

 

Cloud by the Megawatt: Inside IBM’s Cloud Data Center Strategy

If you are like me you will read this article and think to yourself "Wait, IBM has a Cloud?"

 

VMware, AWS Join Forces in Battle for Enterprise Cloud Market

This partnership marks an important shift in the market and should cause some concern for Microsoft. That being said, I also see this partnership as a last-ditch effort to keep VMware relevant before being absorbed completely by Dell.

 

Here are the 61 passwords that powered the Mirai IoT botnet

Proving once again that the cobbler's children have no shoes, we have an army of devices built by people that should know better, but don't put into practice the basics of security.

 

Twitter, Microsoft, Google and others say they haven’t scanned messages like Yahoo

I feel I have heard this story before, and I think I know how it ends.

 

Are microservices for you? You might be asking the wrong question

"Change for the sake of change is rarely a sensible use of time." If only they taught this at all the charm schools known as the MBA.

 

Latency numbers every programmer should know

Not a bad start at a complete list, and I would argue that more than just programmers should know these numbers. I've had to explain the speed of light to managers before.

 

7 Times Technology Almost Destroyed The World

Here's hoping the robots can act a bit more like humans when it counts the most.

 

Autumn has arrived here in New England, and that means apple picking is in full swing:

apple - 1.jpg

One of the questions that comes up during the great debate on the security of the Internet of Things (IoT) is the responsibility of device manufacturers to support those devices. When we buy a refrigerator or a toaster, we expect that device to last through the warranty date and well beyond. Assuming it is a well-made unit, it may last for a long time. But what about devices that only live as long as someone else wants them to?

Time's Up!

Remember Revolv? The smart hub for your home that was purchased by Nest? They stopped selling the device in October 2014 after being purchased, but a year and a half later they killed the service entirely. The Internet was noticeably upset and cried out that Google and Nest had done a huge disservice to their customers by killing the product. The outcry was so fierce that Nest ended up offering refunds for devices.

The drive to create new devices for IoT consumption is huge. Since most of them require some kind of app or service to operate correctly, it also stands to reason that these devices are reliant on the app to work properly. In the case of Revolv, once the app was shut down the device was no longer able to coordinate services and essentially became a huge paperweight. A few companies have chosen to create a software load that allows devices to function in isolated mode, but those are few and far between.

The biggest concern for security here is what happens when devices that are abandoned but still functioning are left to their own ends. A fair number of the devices used in the recent IoT botnet attacks were abandonware cameras running their last software update. Those devices aren't going to have security holes patched or get new features. The fact that they work at all owes more to them being IP-based devices than anything else.

Killing In The Name Of IoT

However, if those manufacturers had installed a kill switch instead of allowing the cameras to keep working, it would have prevented some of the chaos of the attack. Yes, the buyers of those cameras would have been irritated that the functionality was lost. But it could have made a massive security issue easier to deal with.

Should manufacturers be responsible for installing a software cut-out that allows a device to be remotely disabled when the support period expires? That's a thorny legal question. It opens the manufacturer up to lawsuits and class action filings about creating products with known defects. But it also raises the question of whether or not these same manufacturers should have a greater duty to the safety of the Internet.

And this isn't taking into account the huge issues with industrial IoT devices. Could you imagine what might happen if an insulin pump or an electrical smart meter was compromised and used as an attack vector? The damage could be catastrophic. Worse yet, even with a kill switch or cut-out to prevent transmission, neutering those devices renders them non-functional and potentially hazardous. Medical devices that stop working cause harm and possibly death. Electrical meters that go offline create hazards for people living in homes.

Decisions, Decisions

There's no easy answer to all these problems. Someone is going to be mad no matter what we decide. Either these devices live on in their last known configuration and can be exploited, or they get neutered when they shut down. The third option, having manufacturers support devices forever, isn't feasible either. So we have to make some choices here. We have to stand up for what we think is right and make it happen.

Make sure your IoT policy spells out what happens to out-of-support devices. Make sure your users know that you are going to implement a traffic kill switch if your policy spells it out. Knowledge is key here. Users will understand your reasons if communicated ahead of time. And using tools from SolarWinds to track those devices and keep tabs on them helps you figure out when it's time to implement those policies. Better to have it sorted out now than have to deal with a problem when it happens.
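
As a simple starting point for that policy, the sketch below walks a device inventory and flags anything past its end-of-support date so the traffic kill switch (or quarantine rule) can be applied. The inventory, device names, and dates are hypothetical; in practice this data would come from your device-tracking tooling:

```python
from datetime import date

# Hypothetical inventory; in practice this would come from your asset or
# device-tracking tooling.
INVENTORY = [
    {"device": "cam-lobby-01", "ip": "10.0.40.11", "end_of_support": date(2016, 6, 30)},
    {"device": "hub-lab-02",   "ip": "10.0.40.23", "end_of_support": date(2018, 1, 1)},
]

def out_of_support(inventory, today=None):
    """Return every device whose support window has already closed."""
    today = today or date.today()
    return [d for d in inventory if d["end_of_support"] < today]

for dev in out_of_support(INVENTORY):
    # This is where the policy kicks in: quarantine VLAN, ACL, or traffic kill switch.
    print(f"{dev['device']} ({dev['ip']}) is past support ({dev['end_of_support']}); apply the policy")
```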

6622814393_7fbe9569da_o.jpg

Image courtesy of Spirit-Fire on Flickr

 

I think I'd like to mirror a session title from the recent THWACKcamp and subtitle this particular post "Don't Hate Your Monitoring." We all face an enormous challenge in monitoring our systems and infrastructure, and in part that's caused by an underlying conflict:

 

monitor_all.jpg Color_Overload.jpg

Image Courtesy D Sharon Pruitt

 

This is a serious problem for everybody. We want to monitor everything we possibly can. We NEED to monitor everything we can, because heaven help us if we miss something important because we don't have the data available. At the same time, we cannot possibly cope with the volume of information coming into our monitoring systems; it's overwhelming, and manually sifting through it to find the alerts or data that actually matter to the business is a losing battle. And then we wonder why people are stressed, and why we have a love/hate relationship with our monitoring systems!

 

How can the chaos be minimized? Well, some manual labor is required up front, and after that it will be an iterative process that's never truly complete.

 

Decide what actually needs to be monitored

It's tempting to monitor every port on every device, but do you really need to monitor every access switch port? Even if you want to maintain logs for those ports for other reasons, you'll want to filter alerts for those ports so that they don't show up in your day-to-day monitoring. If somebody complains about bad performance, then digging into the monitoring and alerting is a good next step (maybe the port is fully utilized, or spewing errors constantly), but that's not business critical, unless, perhaps, it's your CEO's switch port.

 

Focus on which alerts you generate in the first place

  • Use Custom Properties to label related systems so that alerts can be generated intelligently based on those custom labels.
  • Before diving into the Alert Suppression tab to keep things quiet, look carefully at Trigger Conditions and try to add intelligent queries in order to minimize the generation of alerts in the first place. The trigger conditions allow for some quite complex nested logic which can really help make sure that only the most critical alerts hit the top of your list.
  • Use trigger conditions to suppress downstream alerts (e.g., if a site router is down, don't trigger alerts for the now-unreachable devices behind it).

 

Suppress Alerts!

I know I just said not to dive into Alert Suppression, but it's still useful as the cherry on top of the cream that is carefully managed triggers.

  • It's better in general to create appropriate rules governing when an alert is triggered than to suppress it afterwards. Alert suppression is in some ways rather a blunt tool; if the condition is true, all alerts are suppressed.
  • One way to achieve downstream alert suppression is to add a suppression condition to devices on a given site that queries for the status of that site's edge router; if the router status is not "Up", the condition becomes true, and it should suppress the triggered alerts from that end device. This could also be achieved using Trigger Conditions, but it's cleaner in my opinion to do it in the Alert Suppression tab. Note that I said "not Up" for the node status rather than "Down"; that means the condition will evaluate to true for any status except Up, rather than explicitly requiring it to be only "Down". The more you know, etc... A minimal sketch of this logic appears below.
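
Here is that minimal sketch. It is purely conceptual and does not use the Orion API: a child device's alert is suppressed whenever its site's edge router reports any status other than "Up". Device names, sites, and statuses are made up:

```python
# Conceptual downstream-suppression logic only; not the Orion API.
ROUTER_STATUS = {"site-a-edge": "Down", "site-b-edge": "Up"}

ALERTS = [
    {"node": "site-a-sw01", "site_router": "site-a-edge", "message": "Node not responding"},
    {"node": "site-b-sw01", "site_router": "site-b-edge", "message": "High CPU"},
]

def active_alerts(alerts, router_status):
    """Suppress any alert whose site edge router is in any status other than Up."""
    kept = []
    for alert in alerts:
        if router_status.get(alert["site_router"]) != "Up":
            continue  # condition is true for Down, Unknown, Unmanaged, etc.
        kept.append(alert)
    return kept

for alert in active_alerts(ALERTS, ROUTER_STATUS):
    print(f"{alert['node']}: {alert['message']}")
```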

 

Other features that may be helpful

  • Use dependencies! Orion is smart enough to know the implicit dependencies of, say, CPU and Memory on the Host in which they are found, but site or application-level dependencies are just a little bit trickier for Orion to guess. The Dependencies feature allows you to create relationships between groups of devices so that if the 'parent' group is down, alerts from the 'child' group can be automatically suppressed. This is another way to achieve downstream alert suppression at a fairly granular level.
  • Time-based monitoring may help for sites where the cleaner unplugs the server every night (or the system has a scheduled reboot), for example.
  • Where appropriate, consider using the "Condition must exist for more than <x> minutes" option within Trigger Conditions to avoid getting an alert for every little blip in a system (see the sketch after this list). This theoretically slows down your notification time, but can help clear out transient problems before they disturb you.
  • Think carefully about where each alert type should be sent. Which ones are pager-worthy, for example, versus ones that should just be sent to a file for historical record keeping?
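
For the "condition must exist for more than <x> minutes" idea mentioned above, here is a small, hedged sketch of the underlying debounce logic; the thresholds and sample data are illustrative only:

```python
import time

class DebouncedAlert:
    """Fire only when a condition has been continuously true for `hold_seconds`."""

    def __init__(self, hold_seconds):
        self.hold_seconds = hold_seconds
        self.first_seen = None

    def evaluate(self, condition_true, now=None):
        now = now if now is not None else time.time()
        if not condition_true:
            self.first_seen = None          # condition cleared: reset the timer
            return False
        if self.first_seen is None:
            self.first_seen = now           # condition just started
        return (now - self.first_seen) >= self.hold_seconds

# Example: only alert if CPU has been above 90% for 5 minutes straight.
alert = DebouncedAlert(hold_seconds=300)
for t, cpu in [(0, 95), (60, 97), (120, 40), (180, 95), (480, 96)]:
    if alert.evaluate(cpu > 90, now=t):
        print(f"t={t}s: alert fires (sustained high CPU)")
    else:
        print(f"t={t}s: no alert (cpu={cpu}%)")
```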

 

Performance and Capacity Monitoring

  • Baselining. As I discussed in a previous post, if you don't know what the infrastructure is doing when things are working correctly, it makes it even harder to figure out what's wrong when there's a problem. This might apply to element utilization, network routing issues, and more. This information doesn't have to be in your face all the time, but having it to hand is very valuable.

 

BUT!

 

Everything so far talks about how to handle alerting when events occur. This is "reactive" monitoring, and it's what most of us end up doing. However, to achieve true inner peace we need to look beyond the triggers and prevent the event from happening in the first place. Obviously there's not much that can be done about power outages or hardware failures, but in other ways we can help ourselves by being proactive.

 

Proactive monitoring basically means preempting avoidable alerts. SolarWinds software offers a number of features to forecast and plan for capacity issues before they become alerts. For example, Virtualization Manager can warn of impending doom for VMs and their hosts; Storage Resource Monitor tracks capacity trends for storage devices; Network Performance Monitor can forecast exhaustion dates on the network; User Device Tracker can monitor switch port utilization. Basically, we need to use the forecasting/trending tools provided to look for any measurement that looks like it's going to hit a threshold, check with the business to determine any additional growth expected, then make plans to mitigate the issue before it becomes one.
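
To show what that kind of forecasting boils down to, here is a minimal sketch that fits a linear trend to utilization history and estimates when it will cross a threshold. The sample data and threshold are made up; the products named above do this (and much more) for you:

```python
# A minimal sketch of threshold forecasting with a linear trend. The daily
# utilization samples below are made up; real data would come from your
# monitoring history.

SAMPLES = [(0, 61.0), (7, 64.5), (14, 68.2), (21, 71.9), (28, 75.4)]  # (day, % used)
THRESHOLD = 90.0

def days_until_threshold(samples, threshold):
    """Least-squares linear fit, then solve for the day the trend hits the threshold."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
             / sum((x - mean_x) ** 2 for x, _ in samples))
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # flat or shrinking: no exhaustion date on this trend
    return (threshold - intercept) / slope

eta = days_until_threshold(SAMPLES, THRESHOLD)
print(f"~{eta:.0f} days until {THRESHOLD}% at the current growth rate" if eta else "no growth trend")
```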

 

Hating Our Monitoring

 

We don't have to hate our monitoring. Sadly, the tools tend to do exactly what we tell them to, and we sometimes expect a little too much from them in terms of having the intelligence to know which alerts are important, and which are not. However, we have the technology at our fingertips, and we can make our infrastructure monitoring dance, if not to our tune (because sometimes we need something that just isn't possible at the moment), then at least to the same musical genre. With careful tuning, alerting can largely be mastered and minimized. With proactive monitoring and forecasting, we can avoid some of those alerts in the first place. After all -- and without wishing to sound too cheesy -- the best alert is the one that never triggers.

For the unprepared, managing your agency’s modern IT infrastructure with all its complexity can be a little scary. Evolving mandates, the constant threat of a cyber attack, and a connected workforce that demands access to information when they want it, where they want it, place more pressure on the government’s IT professionals than ever. And at the heart of it all is still the network.

 

At SolarWinds we know today’s government IT pro is a Bear Grylls-style survival expert. And in true Man vs. Wild fashion, the modern IT pro needs a network survival guide to be prepared for everything.

 

Assess the Network

 

Every explorer needs a map. IT pros are no different, and the map you need is of your network. Understanding your network’s capabilities, needs, and resources is the first step of network survival.

 

Ask yourself the following questions:

 

  • How many sites need to communicate?
  • Are they located on the intranet, or externally and accessed via a datacenter?
  • Is the bulk of my traffic internal, or is it all bound for the Internet? How about any key partners and contractors?
  • Which are the key interfaces to monitor?
  • Where should deep packet inspection (DPI) agents go?
  • What is the scope and scale of what needs to be monitored?

 

Acknowledge that Wireless is the Way

 

What’s needed are tools like wireless heat maps to manage over-subscribed access points, and user device tracking tools that allow agencies to track rogue devices and enforce their BYOD policies. The problem is that many of these tools have traditionally been cost-prohibitive, but newer options you might not be aware of open the door to implementing these technologies.

 

Prepare for the Internet of Things

 

The government can sometimes be slower to adopt new technology, but agencies are increasingly experimenting with the Internet of Things. How do you overcome these challenges? Application-aware firewalls can untangle even the sneakiest device conversations. Beyond that, get IP address management under control and get your gear ready for IPv6. Classify and segment your device traffic, implement effective quality of service to ensure that critical business traffic has headroom, and, of course, monitor flow.

 

Understand that Scalability is Inevitable

 

It is important to leverage capacity forecasting tools, configuration management, and web-based reporting to be able to predict and document scalability and growth needs so you can justify your budget requests and stay ahead of infrastructure demands.

 

Just admit it already—it’s All About the Application

 

Everything we do is because of and for the end-users. The whole point of having a network is to achieve your mission and support your stakeholders. Seek a holistic view of the entire infrastructure, including the impact of the network on application issues, and don’t silo network management anymore.

 

A Man is Only as Good as His Tools

 

Having sophisticated network monitoring and management tools is an important part of arming IT professionals for survival, but let’s not overlook the need for certifications and training, so the tools can be used to effectively manage the network.

 

Revisit, Review, Revise

 

What’s needed to keep your network running at its peak will change, so your plans need to adapt to keep up. Constantly reexamine your network to be sure that you’re addressing changes as they arise. Successful network management is a cyclical process, not a one-way journey.

 

Find the full article on Federal Technology Insider.

A Neverending IT Journey around Optimizing, Automating, and Reporting on Your Virtual Data Center

Introduction

 

The journey of one begins with a single virtual machine (VM) on a host. The solitary instance in a virtual universe with the vastness of the data center as a mere dream in the background. By itself, the VM is just a one-to-one representation of its physical instantiation. But virtualized, it has evolved, becoming software defined and abstracted. It’s able to draw upon a larger pool of resources should its host be added to a cluster. With that transformation, it becomes more available, more scalable, and more adaptable for the application that it is supporting.

 

The software abstraction enabled by virtualization provides the ability to quickly scale across many axes without scaling the overall physical footprint. The skills required to do this efficiently and effectively are encompassed by optimization, automation, and reporting. The last skill is key because IT professionals cannot save their virtual data center if no one listens to and seeks to understand them. Moreover, the former two skills are complementary. And as always, actions speak louder than words.

 

SOAR.PNG

 

In the following weeks, I will cover practical examples of optimization, automation, and reporting in the virtual data center. Next week will cover optimization in the virtual data center. The week after will follow with automation. And the final week will discuss reporting. In this case, order does matter. Automation without optimization consideration will lead to work being done that serves no business-justified purpose. Optimization and automation without reporting will lead to insufficient credit for the work done right, as well as misinforming decision makers of the proper course of action to take.

 

I hope you’ll join me for this journey into the virtual IT universe.

Last week was Microsoft Ignite in Atlanta. I had the privilege of giving two presentations, one of which was titled "Performance Tuning Essentials for the Cloud DBA." I was thinking of sharing the slides, but the slides are just there to enhance the story I was telling. So I've decided instead to share the narrative here in this post today, including the relevant images. As always, you're welcome.

 

I started with two images from the RightScale 2016 State of the Cloud Report:

Slide05.jpg

 

The results of that survey help to show that hybrid IT is real, it's here, and it is growing. Using that information, combined with the rapid advances we see in the technology field with each passing year, I pointed out how we won't recognize IT departments in five years.

 

For a DBA today, and also the DBA in five years, it shouldn't matter where the data resides. The data can be either down the hall or in the cloud. That's the hybrid part, noted already. But how does one become a DBA? Many of us start out as accidental DBAs, or accidental whatevers, and in five years there will be accidental cloud DBAs. And those accidental cloud DBAs will need help. Overwhelmed at first, the cloud DBA will soon learn to focus on their core mission:

 

paid.png

 

Once the cloud DBA learns to focus on his or her core mission (recovery), they can start learning how to do performance troubleshooting (because bacon ain't free). I believe that when it comes to troubleshooting, it is best to think in buckets. For example, if you are troubleshooting a virtualized database server workload, the first question you should be asking yourself is, "Is the issue inside the database engine or is it external, possibly within the virtual environment?" In time, the cloud DBA learns to think about all kinds of buckets: virtual layers, memory, CPU, disk, network, locking, and blocking. Existing DBAs already have these skills. But as we transition to being cloud DBAs, we must acknowledge that there is a gap in our knowledge and experience.

 

That gap is the network.

 

Most DBAs have little to no knowledge of how networks work, or how network traffic is utilized. A database engine, such as SQL Server, has little knowledge of any network activity. There is no DMV to expose such details, and a DBA would need to collect O/S level details on all the machines involved. That's not something a DBA currently does; we take networks for granted. To a DBA, networks are like the plumbing in your house. It's there, and it works, and sometimes it gets clogged.

 

But the cloud demands that you understand networks. Once you go cloud, you become dependent upon networks working perfectly, all the time. One little disruption, because someone didn't call 1-800-DIG-SAFE before carving out some earth in front of your office building, and you are in trouble. And it's more than just the outage that may happen from time to time. No. You need to know about your network as a cloud DBA for the following reasons: RPO, RTO, SLA, and MTTI. I've talked before about RPO and RTO here, and I think anyone reading this would know what SLA means. MTTI might be unfamiliar, though. I borrowed that from adatole. It stands for Mean Time To Innocence, and it is something you want to keep as short as possible, no matter where your data resides.

 

You may have your RPO and RTO well-defined, but do you know if you can meet those metrics right now? Turns out the internet is a complicated place:

 

Slide20.jpg

 

Given all that complexity, it is possible that data recovery may take a bit longer than expected. When you are a cloud DBA, the network is a HUGE part of your recovery process. The network becomes THE bottleneck that you must focus on first and foremost in any situation. In fact, when you go cloud, the network becomes the first bucket you need to consider. The cloud DBA needs to be able to determine, in five minutes or less, whether the network is the issue before spending any time trying to tune a query. And that means the cloud DBA is going to have to understand what is clogging that pipe:

 

Slide23.jpg

 

Because when your phone rings, and the users are yelling at you that the system is slow, you will want to know that the bulk of the traffic in that pipe is Pokemon Go, and not the data traffic you were expecting.
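
One way to get that five-minute answer is a quick round-trip check to the database endpoint before you ever open a query plan. This is a hedged sketch; the host name, port, and 50 ms threshold are hypothetical placeholders:

```python
import socket
import statistics
import time

def tcp_rtt_ms(host, port, attempts=5, timeout=3.0):
    """Measure TCP connect time to an endpoint as a rough network health check."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                samples.append((time.perf_counter() - start) * 1000.0)
        except OSError:
            samples.append(float("inf"))   # a failed connection counts as "bad"
    return samples

# Hypothetical SQL Server endpoint; substitute your own.
rtts = tcp_rtt_ms("mydb.example.com", 1433)
good = [r for r in rtts if r != float("inf")]
if not good or statistics.median(good) > 50.0:   # 50 ms is an illustrative threshold
    print("Check the network first before touching the query plan.", rtts)
else:
    print(f"Network looks fine (median {statistics.median(good):.1f} ms); go tune the query.")
```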

 

Here's a quick list of tips and tricks to follow as a cloud DBA.

 

  1. Use the Azure Express! Azure ExpressRoute is a dedicated link to Azure, and you can get it from Microsoft or a managed service provider that partners with Microsoft. It's a great way to bypass the complex web known as the internet and get better throughput. Yes, it costs extra, but only because it is worth the price.
  2. Consider Alt-RPO, Alt-RTO. For those times when your preferred RPO and RTO needs won't work, you will want an alternative. For example, you have an RPO of 15 minutes and an RTO of five minutes. But the network is down, so you fall back to an Alt-RPO of an hour and an Alt-RTO of 30 minutes, storing backups locally instead of in the cloud. The business would rather be back online, even to the last hour, than wait for the original RPO/RTO to be met. (A minimal sketch of this decision logic follows after this list.)
  3. Use the right tools. DBAs have no idea about networks because they don't have any tools to get them the details they need. That's where a company like SolarWinds comes in to be the plumber and help you unclog those pipes.
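
Here is that Alt-RPO sketch: given the age of the latest cloud and local backups and whether the cloud path is reachable, it reports which recovery point target you can currently meet. The targets reuse the example numbers from the list above, and everything else is hypothetical:

```python
from datetime import datetime, timedelta

RPO = timedelta(minutes=15)        # primary target (cloud copy)
ALT_RPO = timedelta(minutes=60)    # fallback target (local copy)

def rpo_status(last_cloud_backup, last_local_backup, cloud_reachable, now=None):
    """Report which recovery point objective you can currently meet."""
    now = now or datetime.utcnow()
    if cloud_reachable and now - last_cloud_backup <= RPO:
        return "meeting primary RPO"
    if now - last_local_backup <= ALT_RPO:
        return "cloud path degraded: falling back to Alt-RPO"
    return "neither RPO target is met - escalate"

print(rpo_status(
    last_cloud_backup=datetime.utcnow() - timedelta(minutes=40),
    last_local_backup=datetime.utcnow() - timedelta(minutes=20),
    cloud_reachable=False,
))
```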

 

Thanks to everyone that attended the session last week, and especially to those that followed me back to the booth to talk data and databases.

476.JPG

In previous posts, I've written about making the best of your accidental DBA situation.  Today I'm going to give you my advice on the things you should focus on if you want to move from accidental DBA to full data professional and DBA.

 

As you read through this list, I know you'll be thinking, "But my company won't support this, that's why I'm an accidental DBA." You are 100% correct. Most companies that use accidental DBAs don't understand the difference between developer and DBA, so many of these items will require you to take your own initiative. But since you are reading this, I know you are already a great candidate to be that DBA.

 

Training

 

Your path to becoming a DBA has many forks, but I'm a huge fan of formal training. This can be virtual or in-person. By virtual I mean a formal distance-learning experience, with presentations, instructor Q&A, hands-on labs, exams and assignments. I don't mean watching videos of presentations. Those offerings are covered later.

 

Formal training gives you greater confidence and evidence that you learned a skill, not that you only understand it. Both are important, but when it comes to that middle-of-the-night call alerting you that databases are down, you want to know that you have personal experience in bringing systems back online.

 

Conferences

Conferences are a great way to learn, and not just from invited speakers. Speaking with fellow attendees, via the hallway conferences that happen in conjunction with the formal event,  gives you the opportunity to network with people who boast a range of skill levels. Sharing resource tips with these folks is worth the price of admission.

 

User Groups and Meetups

I run the Toronto Data Platform and SQL Server Usergroup and Meetup, so I'm a bit biased on this point. However, these opportunities to network and learn from local speakers are often free to attend. Such a great value! Plus, there is usually free pizza. Just saying. You will never regret having met other data professionals in your local area when you are looking for your next project.

 

Online Resources

Online videos are a great way to supplement your formal training. I like Pluralsight because it's subscription-based, not per video. They offer a free trial, and the annual subscription is affordable, given the breadth of content offered.

 

Webinars given by experts in the field are also a great way to get real-world experts to show and tell you about topics you'll need to know. Some experts host their own, but many provide content via software vendor webinars, like these from SolarWinds.

 

Blogs

Blogs are a great way to read tips, tricks, and how-tos. It's especially important to validate the tips you read about. My recommendation is that you validate any rules of thumb or recommendations you find by going directly to the source: vendor documentation and guidelines, other experts, and asking for verification from people you trust. This is especially true if the post you are reading is more than three months old.

 

But another great way to become a professional DBA is to write content yourself. As you learn something and get hands-on experience using it, write a quick blog post. Nothing makes you understand a topic better than trying to explain it to someone else.

 

Tools

I've learned a great deal more about databases by using tools that are designed to work with them. This can be because the tools offer guidance on configuration, do validations and/or give you error messages when you are about to do something stupid.  If you want to be a professional DBA, you should be giving Database Performance Analyzer a test drive.  Then when you see how much better it is at monitoring and alerting, you can get training on it and be better at databasing than an accidental DBA with just native database tools.

 

Labs

The most important thing you can do to enhance your DBA career is to get hands-on with the actual technologies you will need to support. I highly recommend you host your labs via the cloud. You can get a free trial for most. I recommend Microsoft Azure cloud VMs because you likely already have free credits if you have an MSDN subscription. There's also a generous 30-day trial available.


I recommend you set up VMs with various technologies and versions of databases, then turn them off. With most cloud providers, such as Azure, a VM that is stopped and deallocated incurs no charge except for storage, which is very inexpensive. Then when you want to work with that version of software, you turn on your VM, wait a few minutes, start working, then turn it off when you need to move on to another activity.
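
If you want to script that routine, something like the following works. This is a sketch that shells out to the Azure CLI; the resource group and VM names are hypothetical, and deallocating (rather than a plain OS shutdown) is what actually stops the compute charges:

```python
import subprocess

RESOURCE_GROUP = "dba-lab-rg"             # hypothetical lab resource group
LAB_VMS = ["sql2016-lab", "sql2014-lab"]  # hypothetical lab VM names

def az(*args):
    """Run an Azure CLI command and raise if it fails."""
    subprocess.run(["az", *args], check=True)

def stop_lab():
    # 'deallocate' (not just 'stop') releases compute, so you pay only for storage.
    for vm in LAB_VMS:
        az("vm", "deallocate", "--resource-group", RESOURCE_GROUP, "--name", vm)

def start_lab(vms=None):
    for vm in vms or LAB_VMS:
        az("vm", "start", "--resource-group", RESOURCE_GROUP, "--name", vm)

if __name__ == "__main__":
    stop_lab()   # end of a lab session: deallocate everything
```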

 

The other great thing about working with Azure is that you aren't limited to Microsoft technologies.  There are VMs and services available for other relational database offerings, plus NoSQL solutions. And, of course, you can run these on both Windows and Linux.  It's a new Microsoft world.

 

The next best thing about having these VMs ready at your fingertips is that you can use them to test new automation you have developed, test new features you are hoping to deploy, and develop scripts for your production environments.

 

Think Like a DBA, Be a DBA

The last step is to realize that a DBA must approach issues differently than a developer, data architect, or project manager would. A DBA's job is to keep the database up and running, with correct and timely data. That goal requires different thinking and different methods. If you don't alter your problem-management thinking accordingly, you will likely come to the wrong cost, benefit, and risk decisions. So think like a DBA, be a DBA, and you'll get fewer middle-of-the-night calls.

Thanks to everyone that stopped by the booth at Microsoft Ignite last week, it was great talking data and databases. I'm working on a summary recap of the event so look for that as a separate post in Geek Speak later this week.

 

In the meantime, here's a bunch of stuff I found on the Intertubz in the past week that you might find interesting, enjoy!

 

Will IoT become too much for IT?

The IoT is made up of billions of unpatched and unmonitored devices, what could possibly go wrong?

 

Largest ever DDoS attack: Hacker makes Mirai IoT botnet source code freely available on HackForums

This. This is what could go wrong.

 

Clinton Vows To Retaliate Against Foreign Hackers

I don't care who you vote for, there is no way you can tell me you think either candidate has any idea what to do about the Cyber.

 

Marissa Mayer Forced To Admit That She Let Your Mom’s Email Account Get Hacked

For example, here is one of the largest tech companies making horrible decisions about 500 MILLION accounts being hacked. I have little faith in anyone when it comes to data security except for Troy Hunt. Is it too late to elect him Data Security Czar?

 

California OKs Self-Driving Vehicles Without Human Backup

Because it seemed weird to not include yet another link about self-driving cars. 

 

BlackBerry To Stop Making Smartphones

The biggest shock I had while reading this was learning that BlackBerry was still making smartphones. 

 

Fake island Hy-Brasil printed for 500 years

I'm going with the theory that this island was put there in order to catch people making copies of the original work, but this article is a nice reminder why crowd-sourced pieces of work (hello Wikipedia) are often filled with inaccurate data.

 

At Microsoft Ignite last week patrick.hubbard found this documentation gem:

 

ports - 1.jpg

I’ve come to a crossroads. Regular SolarWinds Lab viewers and new THWACKcamp attendees might have noticed my fondness for all things programmable. I can’t help smiling when I talk about it; I came to enterprise IT after a decade as a developer. But if you run across my spoor outside of SolarWinds, you may notice a thinly-veiled, mild but growing despair. On the flight back from Microsoft® Ignite last week, I reluctantly accepted reality: IT, as we know it, will die.

 

Origin of a Bummer

 

On one hand, this should be good news because a lot of what we deal with in IT is, quite frankly, horrible and completely unnecessary. I’m not referring to managers who schedule weekly all-hands that last an hour because that’s the default meeting period in Outlook®. Also not included are 3:00am crisis alerts that prompt you to stumble to the car with a baseball hat because the issue is severe enough to take out the VPN, too. Sometimes it’s okay to be heroic, especially if that means you get to knock off early at 2:00pm.

 

The perennial horror of IT is boredom. Tedium. Repetitive, mindless, soul-crushing tasks that we desperately want to remediate, delegate, or automate, but can’t because there’s no time, management won’t let us, or we don’t have the right tools.

 

All of this might be okay, except for two things: accelerating complexity and the move to the cloud. The skinny jeans-clad new kids can’t imagine any other way, and many traditional large enterprise IT shops also hit the cloud hookah and discovered real benefits. Both groups recognized dev as a critical component, and their confidence comes from knowing that they can and will create whatever their IT requires to adapt and realize the benefits of new technology.

 

No, the reason this is a bummer – if only for five or so years – is that it’s going to hit the people I have the greatest affinity for the hardest: small to medium business IT, isolated independent department IT in large organizations, and superstar admins with deep knowledge in highly specialized IT technology. In short, those of you who’ve worn all the hats I have at one point or another over the last two decades.

 

I totally understand the reasonable urge to stand in front of a gleaming Exchange rack and tell all the SaaS kids to get off your lawn. But that’s a short-term solution that won’t help your career. In fact, if you’re nearing or over 50, this is an especially critical time to be thinking about the next five years. I watched some outstanding rack-and-stack app infrastructure admins gray out and fade away because they resisted the virtualization revolution. Back then, I had a few years to get on board, gain the skills to tame VMs, and accelerate my career.

 

This time, however, I’m actively looking ahead, transitioning my education and certification, and working in production at least a little every week with cloud and PaaS technology. I’m also talking to management about significant team restructuring to embrace new techniques.

 

Renewed Mission

 

Somewhere over Louisiana I accepted the macro solution that we’ll each inevitably face, but also my personal role in it. We must tear down IT as we know it, and rebuild something better suited to a data center-less reality. We’ll abandon procedural ticket-driven change processes, learn Sprints, Teaming, Agile, and, if we’re lucky, get management on board with a little Kanban, perhaps at a stand-up meeting.

 

And if any or all of that sounds like New Age, ridiculous mumbo jumbo, that’s perfectly okay. That is a natural and understandable reaction of pragmatic professionals who need to get tish done. My role is to help my peers discover, demystify, and develop these new skills. Further, it’s to help management stop thinking of you as rigidly siloed and ultimately replicable when new technology forces late-adopting organizations into abrupt shifts and spasms of panicked change.

 

But more than that, if these principles are even partially adopted to enable DevOps-driven IT, life is better. The grass really is greener. I’ve seen it, lived it, and, most surprising to this skeptical, logical, secret introvert, I’ve come to believe it. My job now is to combine my fervor for the tools we’ll use with a career of hard-won IT lessons and do everything I can to help. Don’t worry. This. Is. Gonna. Be. Awesome.

Simplifying network management is a challenging task for any organization, especially those that have chosen a best-of-breed route and have a mix of vendors. I ask my customers to strive for these things when looking to improve their network management and gain some efficiency.

 

  1. Strive for a Single Source of Truth—As an administrator there should be a single place that you manage information about a specific set of users or devices (e.g. Active Directory as the only user database). Everything else on the network should reference that source for its specific information. Multiple domains or maintaining a mix of LDAP and RADIUS users makes authentication complicated and arguably may make your organization less secure as maintaining these multiple sources is burdensome. Invest in doing one right and exclusively.
  2. Standardization—A tremendous amount of time savings can be found by eliminating one-off configurations, sites, situations, etc. An often-overlooked part of this time savings is in consulting and contractor costs: the easier it is for an internal team to quickly identify a location, IDF, device, etc., the easier it will be for your hired guns as well. A system should be in place for IP address schemes, VLAN numbering, naming conventions, low voltage cabling, switch port usage, redundancy, etc.
  3. Configuration Management—Creating a plan for standardization is one thing; ensuring it gets executed is tougher. There are numerous tools that allow for template-based or script-based configuration (see the sketch after this list). If your organization is going to take the time to standardize the network, it is critical that it gets followed through on the configuration side. DevOps environments may turn to products like Chef, Puppet, or Ansible to help with this sort of management.
  4. Auditing and Accountability—Being proactive about policing these efforts is important, and to do that, some sort of accountability needs to be in place. Changes should go through change control meetings to ensure they are well thought out and meet the design standards, safeguards should ensure the right people are making the changes, and those changes should be traceable back to a specific person (no shared “admin” or “root” accounts!) so that all of the hard work put in to this point is actually maintained. New hires should be trained and indoctrinated in the system to ensure that they follow the process.
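
To make points 2 and 3 concrete, here is a minimal, hypothetical sketch of template-driven configuration. The naming convention, VLAN numbers, and IP scheme are examples of the kind of standard worth defining once; a tool like Ansible, Chef, or Puppet would render and push templates like this at scale:

```python
# Render a standard switch config per site from a simple template. Everything
# here (names, VLANs, addressing) is an illustrative standard, not a recommendation.

TEMPLATE = """hostname {site}-{idf}-sw{unit:02d}
vlan {user_vlan}
 name {site}-users
vlan {voice_vlan}
 name {site}-voice
interface Vlan{mgmt_vlan}
 ip address 10.{site_id}.{mgmt_vlan}.{unit} 255.255.255.0
"""

SITES = [
    {"site": "bos", "site_id": 10, "idf": "idf1", "unit": 1,
     "user_vlan": 100, "voice_vlan": 200, "mgmt_vlan": 250},
    {"site": "nyc", "site_id": 20, "idf": "idf1", "unit": 1,
     "user_vlan": 100, "voice_vlan": 200, "mgmt_vlan": 250},
]

for site in SITES:
    print(TEMPLATE.format(**site))
```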

 

Following these steps will simplify the network, increase visibility, speed troubleshooting, and even help security. What steps have you taken in your environment to simplify network management? We’d love to hear about it!

With data breaches and insider threats increasing, a vulnerable network can be an ideal entry point that puts sensitive data at risk. As a result, federal IT professionals, like yourself, need to worry not only about keeping people out, but also about keeping those who are already in from doing damage.

 

But while you can’t completely isolate your network, you can certainly make sure that all access points are secure. To do so, you’ll need to concentrate on three things: devices, traffic, and planning.

 

Checkpoint 1: Monitor embedded and mobile devices

 

Although you may not know everything about what your HVAC or other systems with embedded devices are doing, you still need to do what you can to manage them. This means frequent monitoring and patching, which can be accomplished through network performance monitors and patch management systems. The former can give you detailed insight into fault, performance, security and overall network availability, while the latter can provide automated patching and vulnerability management.

 

According to a recent study by Lookout, mobile devices continue to be extremely prevalent in federal agencies, but an alarming number of them are unsanctioned devices that are being used in ways that could put information at risk. A staggering eighty percent of respondents in a SolarWinds survey believe that mobile devices pose some sort of threat to their agency’s security. But, you can gain control of the situation with user device tracking software, which can identify the devices that are accessing your network, alert you to rogue devices, and track them back to individual users.
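The core of user device tracking can be illustrated in a few lines of code: compare what is actually seen on the network against a sanctioned-device inventory and flag everything else. The sketch below is a simplified stand-in; the inventory and observed MAC addresses are hypothetical, and a real tracking tool gathers the observed list from switch and access point tables and maps each MAC back to a user and port automatically.

# Minimal sketch: flag "rogue" devices by comparing MAC addresses observed
# on the network against a sanctioned-device inventory. Both data sets here
# are hypothetical.
SANCTIONED = {
    "00:1a:2b:3c:4d:5e": "alice (agency laptop)",
    "00:1a:2b:3c:4d:5f": "bob (agency tablet)",
}

OBSERVED = [
    "00:1a:2b:3c:4d:5e",
    "aa:bb:cc:dd:ee:ff",   # not in inventory -> flag it
]

def find_rogue_devices(observed, sanctioned):
    """Return the observed MAC addresses that are not in the sanctioned inventory."""
    return [mac for mac in observed if mac.lower() not in sanctioned]

if __name__ == "__main__":
    for mac in find_rogue_devices(OBSERVED, SANCTIONED):
        print(f"ALERT: unsanctioned device on network: {mac}")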

 

Checkpoint 2: Keep an eye on network traffic

 

Network traffic analysis and bandwidth monitoring solutions can help you gain the visibility you may currently lack. You can closely monitor bandwidth and traffic patterns to identify anomalies and address them before they become threats. Bandwidth use can be traced back to individual users, so you can see who and what might be slowing down your network, and automated alerts can notify you of any red flags that arise.
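To show that kind of anomaly detection in its simplest form, here is an illustrative sketch that flags a user whose bandwidth use jumps more than three standard deviations above their historical baseline. The sample data is hypothetical, and production tools build these baselines from NetFlow/sFlow/IPFIX records rather than a hard-coded history.

# Minimal sketch: flag per-user bandwidth anomalies against a simple
# statistical baseline (mean + 3 standard deviations). Data is illustrative.
from statistics import mean, stdev

# Daily usage history per user, in GB (hypothetical values).
HISTORY = {
    "alice": [1.2, 1.4, 1.1, 1.3, 1.2],
    "bob":   [0.8, 0.9, 0.7, 0.9, 0.8],
}

TODAY = {"alice": 1.3, "bob": 9.5}  # bob's spike should trip the alert

def is_anomalous(today: float, history: list) -> bool:
    """Flag usage more than three standard deviations above the historical mean."""
    return today > mean(history) + 3 * stdev(history)

if __name__ == "__main__":
    for user, usage in TODAY.items():
        if is_anomalous(usage, HISTORY[user]):
            print(f"ALERT: {user} used {usage} GB today, well above their baseline")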

 

Checkpoint 3: Have a response plan in place

 

While it’s a downer to say you should always assume the worst, it’s sadly true. There’s a bright side, though! If you assume a breach is going to happen, you’re more likely to be well prepared when it does. If one has happened, you’ll be more likely to find it.

 

This will require developing a process for responding to attacks and identifying breaches. Begin by asking yourself, “given my current state, how quickly would I be able to identify and respond to an attack?” Follow that up with, “what tools do I have in place that will help me prevent and manage a breach?”

 

If you’re uncomfortable with the answers, it’s time to begin thinking through a solid, strategic approach to network security – and start deploying tools that will keep your data from walking out the front door.

 

Find the full article on our partner DLT’s blog, TechnicallySpeaking.

When it comes to the technical aspects of PCI DSS, HIPAA, SOX, and other regulatory frameworks, the goals are often the same: to protect the privacy and security of sensitive data. But the motivators for businesses to comply with these regulatory schemes vary greatly.

Penalties for Noncompliance

 

PCI DSS (Payment Card Industry Data Security Standard)
Scope: Applies to any organization that accepts credit cards for payment
Year established: 2004
Governing body: Payment Card Industry Security Standards Council (PCI SSC)[1]
Penalties:
  • Fines up to $200,000/violation
  • Censure from credit card transactions

HIPAA (Health Insurance Portability and Accountability Act)[2]
Scope: Applies to healthcare-related businesses deemed either covered entities or business associates by law
Year established: 1996
Governing body: The Department of Health and Human Services (HHS) Office for Civil Rights (OCR)
Penalties:
  • Up to $50,000 per record
  • Maximum of $1.5M/year

SOX (Sarbanes–Oxley Act)
Scope: Applies to any publicly traded company
Year established: 2002
Governing body: The Securities and Exchange Commission (SEC)
Penalties:
  • Fines up to $5M
  • Up to 20 years in prison

NCUA (National Credit Union Administration)
Scope: Applies to credit unions
Year established: 1934 (revised 2013)
Governing body: NCUA is the federal agency assigned to enforce a broad range of consumer regulations that apply to federally chartered credit unions and, to a lesser degree, federally insured state-chartered credit unions.[3]
Penalties:
  • Dissolution of your credit union
  • Civil money penalties

GLBA (Gramm-Leach-Bliley Act)
Scope: Applies to financial institutions that offer products or services to individuals, like loans, financial or investment advice, or insurance
Year established: 1999
Governing body: Federal Trade Commission (FTC)
Penalties:
  • $100,000 per violation
  • Up to 5 years in prison

FISMA (Federal Information Security Management Act)
Scope: Applies to the federal government and companies with government contracts
Year established: 2002
Governing body: Office of Management and Budget (OMB), an agency within the Executive Office of the President of the United States
Penalties:
  • Loss of federal funding
  • Censure from future contracts

 

 

This list represents only a fraction of the regulatory compliance frameworks that govern the use of information technology and the processes involved in maintaining the confidentiality, integrity, and availability of sensitive data of all types.

 

Yes, there are monetary fines for noncompliance and for unlawful uses or disclosures of sensitive information – the table above provides an overview of them – and for most businesses, that alone offers plenty of incentive to comply. But beyond fines, businesses should be aware of the many other consequences that can result from noncompliance, or from any other form of negligence that results in a breach.

 

Indirect Consequences of Noncompliance

 

Noncompliance, whether validated by audits or discovered as the result of a breach, can be devastating for a business. When a breach occurs, though, its impact often extends well beyond the fines and penalties levied by enforcement agencies. It can include the cost of detecting the root cause of the breach, remediating it, and notifying those affected. The cost balloons further when you factor in legal expenditures, business-related expenses, and the loss of revenue that follows a damaged brand reputation.

 

As if IT pros did not have enough to worry about these days, compliance, too, unfortunately falls into their laps. Depending on the industries they serve and the types of data their business interacts with, what compliance actually entails can be quite different.

 

Regulatory Compliance and the Intersection with IT

 

Without a doubt, data security standards and compliance regulations touch everything from IT decision-making and purchasing to configurations and the policies and procedures a company must create and enforce to uphold this important task.

 

Organizations looking to comply with a particular regulatory framework must understand that no one solution, and no one vendor, can help prepare them for all aspects of compliance. It is important that IT professionals understand the objectives of every compliance framework they are subject to, and plan accordingly. 

 


[1] The PCI SSC was founded by American Express, Discover Financial Services, JCB International, MasterCard Worldwide and Visa Inc. Participating organizations include merchants, payment card-issuing banks, processors, developers, and other vendors.

[2] The Health Information Technology for Economic and Clinical Health (HITECH) Act, enacted as part of the American Recovery and Reinvestment Act of 2009, prompted the adoption of health information technology. It is recognized as giving “teeth” to HIPAA, establishing stricter requirements under the Privacy, Security, and Breach Notification Rules, as well as stiffer penalties for violations. The HIPAA Omnibus Rule, which went into effect in 2013, further strengthened the OCR’s ability to enforce compliance and clearly defined the compliance responsibilities of all parties that interact with electronic protected health information (ePHI).

[3] It is important to note that in the financial world, guidance from the Federal Financial Institutions Examination Council (FFIEC) to a bank is mandatory, because the guidance specifies the standards that the examiner will use to evaluate the bank. Credit unions technically fall under a different regulator than banks; however, the National Credit Union Administration closely follows FFIEC guidance.

 

