
As 2016 comes to an end, I've looked back and, wow, it has been the year of upgrades for me at my day job. While they have all been successful (some took longer than expected), there were bumps, tears, and even some screaming to get to the finish line. My team and I are seasoned IT professionals, but that didn't stop us from making mistakes and assumptions. What I've learned after doing five major upgrades this year: never assume, always be prepared for the worst, and hope for the best.

 

As you embark on the annual journey of upgrades, there are many factors to look at to make sure it is successful. While it may seem trivial at times, depending on the upgrade, it never hurts to go through a basic upgrade run-through, like a playbook, or, if you have a PMO, to work with a Project Manager. Project Managers can be lifesavers! But you do not need a Project Manager if you take the time to gather all the information and requirements as part of your planning.

 

After looking back through all the upgrades I've done this year, I decided to write this post in the hope that the lessons we learned can help others avoid the same mistakes.

Let’s get back to basics…

Once we start talking upgrades, let's go back to the basics and answer the five "Ws" to gather requirements: WHAT, WHY, WHO, WHERE, and WHEN. Understanding those basic requirements goes a long way. It provides the foundation for understanding the situation and what needs to be done.


WHAT - Let's first ask: what are we upgrading? Is this a server operating system upgrade or an application upgrade? Determining the type of upgrade is vital because it will affect the answers to your other requirements. Once you know what you are upgrading, you will need to determine whether your current environment can support the upgrade. Depending on what you are upgrading, it can feel like opening a can of worms as you find you may need to upgrade other systems to make sure they are compatible with the upgrade you are trying to complete. You may also find that the upgrade reaches beyond your realm of responsibility and crosses over into other departments and functions. A "simple" application upgrade can end up costing millions if your current environment does not support all components.

 

Some example questions to ask:

  1. If you're doing an application upgrade, do your current hardware specs meet the recommendations for the newer version? If not, you may need to invest in new hardware.
  2. Is this an operating system upgrade?
  3. Is this an in-place upgrade or a parallel environment?
  4. Or a complete server replacement?
  5. Can you go directly to the new version, or will you need to install CUs to get current?
  6. Can your current network infrastructure support the upgrade? Does it require more bandwidth?
  7. If you are using load balancers or proxy servers, do those support the upgrade?
  8. Are there client applications that connect to your systems, and are you running supported versions of those client applications?
  9. Do you have Group Policies that need to be modified?
  10. What other applications that "connect" may be impacted?
  11. Are there any legacy customizations in the environment that will be impacted?
  12. Will there be licensing impacts or changes with the upgrade?

 

Sample upgrade scenario:

 

An application like Exchange Server has far-reaching impacts beyond just the application. If an Exchange DAG is implemented, the network must meet certain requirements for successful replication between the databases across datacenters. Working with your network team ensures those requirements are met. You may also need the storage team if you are using a SAN for your storage, which may require new hardware, and we all know that upgrading a SAN can be a project in itself.

 

An often overlooked item is the client connection to Exchange. What version of Outlook are users using to get to their email? If you are using an unsupported version of Outlook, users may have issues connecting to email, which we all know would be a nightmare to deal with. Let's look at the impact of Outlook versions on an Exchange upgrade. If your Outlook versions are not supported, you will need to work with the desktop teams to get everyone to a supported version. This can be costly, from supporting, implementing, and deploying the upgrade to supported Outlook versions, to the fact that, depending on how your Microsoft Office is licensed, you may need to buy additional licenses, and we all know that isn't cheap.

 

WHY - Let's ask why you are doing the upgrade. Is the upgrade needed to address an issue, or is it to stay current? Once this has been identified, you can find out what new features, if any, you will be getting and what value they bring to the table.

 

Additional questions to ask:

 

  1. Will any new features impact my current environment?
  2. If I am addressing an issue with the upgrade, what is it fixing and are there other workarounds?
  3. Will the upgrade break any customizations that may be in the environment?
  4. Can the upgrade be deferred?

 

WHO - Once you've figured out the "WHAT," you will know the "WHO" that needs to be involved. Getting all the key players involved will help make sure that you have your ducks in a row.

 

  1. What other teams will you need to have involved?

 

      • Network team
      • Security Team
      • Storage Team
      • Database Team
      • Server Team
      • Application Team
      • Desktop Support
      • Help Desk
      • Project Management Team
      • Testing and certification Team
      • Communications team to inform end users
      • Any other key players – external business partners if your systems are integrated

 

In certain cases, you may even need a technology partner to help you do the upgrade. This can get complicated, as you will need to determine who is responsible for each part of the upgrade. Having a partner do the upgrade is convenient because they can assume overall responsibility for the success of the upgrade, and you can watch and learn from them. Partners bring value because they are often "experts" who have done these upgrades before and should know the pitfalls and what to watch out for. If you are using a partner, I would recommend you do your own research in addition to relying on the guidance and support provided by the partner, because sometimes the ball can be dropped on their end as well. Keep in mind they are human and may not know everything about a particular application, especially if it's very new.

 

WHEN - When are you planning to do the upgrade? Most enterprises do not like disruptions, so you will need to determine whether the upgrade must be done on a weekend or whether you can do it during the week without impacting users in production.

 

The timing of your upgrade can impact other activities that may be going on in your network. For example, you probably do not want to be doing an application upgrade like Skype for Business or Exchange the same weekend the network team is replacing or upgrading the network switches. This could have you barking up the wrong tree when there's no need to.

 


WHERE - This may seem like an easy question to answer, but depending on what you're upgrading, you may need to make certain arrangements. Let's say you're replacing hardware in the datacenter: you will certainly need someone in the datacenter to perform the swap. If your datacenter is hosted, you may need a hands-on tech to reboot the physical servers in the event a remote reboot doesn't work.

 

I've been in situations where the reboot button didn't work and the power cord of the server had to be pulled to bring the server back online, which involved getting someone in the datacenter to do it. Depending on your setup and processes, this may require you to put in support tickets in advance and coordinate with the datacenter hosting team. Who wants to sit around waiting for several hours to have a server rebooted just to progress to the next step in an upgrade?

 

 

HOW - How isn't really a W, but it is an important step. Sometimes the HOW can be answered by the WHAT, but sometimes it can't, so you must ask, "HOW will this get upgraded?" Documenting the exact steps to complete the upgrade, whether it's in-place or a parallel environment, will help you identify potential issues or key steps that may be missing from the plan. Once you have the steps outlined in detail, it's good to do a walkthrough of the plan with all involved parties so that expectations are clear and set. This also helps prevent any scope creep that could appear along the way. Having a detailed, documented step-by-step plan will also help during the actual upgrade in the event something goes wrong and you need to troubleshoot.

 

Proper Planning keeps the headaches at bay…

 

It would seem like common sense, almost a standard, to answer the five Ws when doing upgrades, but you would be surprised by how often these questions are not asked. Too often we get comfortable in our roles, overlook the simple things, and make assumptions. Assumptions can lead to tears and headaches if they cause a snag in your upgrade. However, a lot of ibuprofen can be avoided if we plan as best we can and go back to the basics of asking the five Ws (and HOW) when gathering information.

Home for a week after the PASS Summit before heading back out to Seattle on Sunday for the Microsoft MVP Summit. It's the one week a year where I get to attend an event as an attendee and not working a booth or helping to manage the event. That means, for a few days I get to put my learn on and immerse myself in all the new stuff coming to the Microsoft Data Platform.

 

As usual, here's a bunch of links I found on the Intertubz that you might find interesting, enjoy!

 

AtomBombing: The Windows Vulnerability that Cannot be Patched

I've been digging around for a day or so on this new threat and from what I can tell it is nothing new. This is how malware works, and the user still needs to allow for the code to have access in the first place (click a link, etc.). I can't imagine who among us falls for such attacks.

 

This is the email that hacked Hillary Clinton’s campaign chief

Then again, maybe I can imagine the people that fall for such attacks.

 

Apple's desensitisation of the human race to fundamental security practices

And Apple isn't doing us any security favors, either.

 

Mirai Malware Is Still Launching DDoS Attacks

Just in case you thought this had gone away for some reason.

 

Earth Temperature Timeline

A nice way of looking at how Earth temperatures have fluctuated throughout time.

 

Surface Studio zones in on Mac's design territory

We now live in a world where Microsoft hardware costs more than Apple hardware. Oh, and it's arguably better, too, considering the Surface still has the escape and function keys.

 

Swarm of Origami Robots Can Self Assemble Out of a Single Sheet

Am I the only one that's a bit creeped out by this? To me this seems to be getting close to having machines think, and work together, and I think we know how that story ends.

 

Management in ten tweets

Beautiful in its simplicity, these tweets could serve as management 101 for many.

 

I wasn't going to let anyone use the SQL Sofa at PASS last week until I had a chance to test it first for, um, safety reasons.

couch - 1.jpg

It is a good time to remember that improving agency IT security should be a yearlong endeavor. Before gearing up to implement new fiscal year 2017 IT initiatives, it is a best practice to conduct a security audit to establish a baseline and serve as a point of comparison when thinking about how the agency's infrastructure and applications should change, and what impact that will have on IT security throughout the year.

 

Additionally, security strategies, plans and tactics must be established and shared so that IT security teams are on the same page for the defensive endeavor.

 

Unique Security Considerations for the Defense Department

 

Defense Department policy requires agencies follow NIST RMF to secure information technology that receives, processes, stores, displays, or transmits DOD information. I’m not going to detail the six-step process—suffice it to say, agencies must implement needed security controls, then assess whether they were implemented correctly and monitor effectiveness to improve security.

 

That brings us back to the security audit: A great way to assess and monitor security measures.

 

Improving Security is a Year-Round Endeavor

 

The DOD has a complex and evolving infrastructure that can make it tricky to detect abnormal activities and ensure something isn’t a threat, while also not prohibiting legitimate traffic. Tools such as security information and event management platforms automate some of the monitoring to lessen the burden.

 

The tools should automate the collection of data and analyze it for compliance, long after audits have been completed.

 

It should also be easy to demonstrate compliance using automated tools, which should help prove compliance quickly; if the tools come with DISA STIGs and NIST FISMA compliance reports, that's another huge time-saver.

 

Performance monitoring tools also improve security posture by identifying potential threats based on anomalies. Network, application, firewall and systems performance management and monitoring tools with algorithms that highlight potential threats effectively ensure compliance and security on an ongoing basis.

 

Five additional best practices help ensure compliance and overall secure infrastructure throughout the year:

 

  • Remove the need to be personally identifiable information (PII) compliant, unless it's absolutely critical. For example, don't store stakeholder PII unless required by agency processes. Not storing the data removes the risk and responsibility of securing it.

 

  • Remove stored sensitive information that isn’t needed. Understand precisely what and how data is stored and ensure what is kept is encrypted, making it useless to attackers.

 

  • Improve network segmentation. Splitting the network into discrete “zones” boosts performance and improves security, a win-win. The more a network is segmented, the easier it will be to improve compliance and security.

 

  • Eliminate passwords. Think about all the systems and applications that fall within an audit zone, and double check proper password use. Better yet, eliminate passwords and implement smart cards, recognized as an industry best practice.

 

  • Build a relationship with the audit team. A close relationship with the audit team ensures they can be relied upon for best practices and other recommendations.

 

  Find the full article on Signal.

Compliance, as it applies to IT departments, involves following rules and regulations that are meant to protect sensitive data of all types. It can govern everything from IT decision-making and purchasing, to configurations, and the policies and procedures a company must create and enforce to uphold this important task.

 

Rightfully so, many businesses are taking the obligation of compliance very seriously. After all, there is a lot at stake when fines and penalties can be levied against you (among other legal repercussions) for noncompliance.

 

As an IT pro, it's important to know what you're up against. Answer these questions from our recent Compliance IQ Quiz – or verify your IQ from your earlier exam – to see how your knowledge stacks up when it comes to IT compliance.

 

Despite InfoSec folklore, the actors most often involved in a breach of sensitive information are coming from outside your company. Unfortunately, understanding the source of these threats is only half the battle when it comes to maintaining IT security and compliance.

 

1.) Which of the following three types of cyberattacks can be classified as an external threat?

 

A) Technical attacks

B) Phishing attacks

C) Physical attacks

D) All of the above

 

HINT: Watch our "IT Security Threats and Risk Management” video (15:47 - 34:35). Click here.

 

Answer: D) All of the above

 

It is true that most threats to your data and compliance initiatives come from beyond the four walls of your organization, but that doesn’t mean your fellow employees can't somehow be involved.

 

2.) Which of the following exploits is classified as "a form of social engineering in which a message, typically an email, with a malicious attachment or link is sent to a victim with the intent of tricking the recipient to open an attachment"?

 

A) Pre-texting

B) Baiting

C) Phishing

D) Elicitation

 

HINT: Would you know if your network was breached? Read this article on solarwinds.com.

 

Answer: C) Phishing

 

If your business interacts with sensitive data that falls under the protection of HIPAA, PCI, NCUA, SOX, GLBA, or other frameworks, then compliance should be on your radar.

 

3.) Poll: Which of the following industries does your business serve? (Select all that apply)

 

A) Financial services

B) Healthcare

C) Technology

D) Federal

E) Education

F) Other

 

See who participated in the quiz in this chart, below:

solarwinds-compliance-iq-quiz.png

 

No locale, industry, or organization is bulletproof when it comes to the compromise of data, even with a multitude of compliance frameworks governing the methods used to prevent unlawful use or disclosure of sensitive data.

 

4.) In the past year, which industry experienced the highest number of breaches of sensitive information? For reference, we have highlighted the key compliance frameworks that guide these industries.

 

A) Financial services - PCI DSS, NCUA, SOX, GLBA, and more

B) Healthcare - HIPAA

C) Technology - ISO, COBIT, and more

D) Federal - FISMA, NERC CIP, GPG 13, and more

E) Education - FERPA

F) Other

 

HINT: Check out Verizon’s 2016 Data Breach Investigation Report. Click here.

 

Answer: A) Financial services

 

If your business must comply with a major IT regulatory framework or scheme, you may be subject to serious penalties for noncompliance.

 

5.) Not adhering to a compliance program can have severe consequences, especially when breaches are involved. Which of the following can result from noncompliance?

 

A) Withdrawal or suspension of a business-critical service

B) Externally defined remediation programs

C) Fines

D) Criminal liability

E) All of the above

 

HINT: Read this Geek Speak post titled The Cost of Noncompliance. Think big picture and across all frameworks.

 

Answer: E) IT compliance violations are punishable by all of these means, and more.

 

The cost of a breach goes well beyond the fines and penalties levied by enforcement agencies. It also includes the cost of detecting the root cause of a breach, remediating it, and notifying those affected. There are also legal expenditures, business-related expenses, and loss of revenue from a damaged brand reputation to take into account.

 

6.) True or False: The price that businesses pay for sensitive data breaches is on the rise globally.

 

HINT: You do the math!

 

Answer: True. According to the Ponemon Institute, the cost associated with a data breach has risen year over year to a current $4 million.

 

Healthcare is increasingly targeted by cyberattacks, including a spree of high-profile breaches and increased enforcement efforts from the OCR over the past few years.

 

7.) What type of data are hackers after if your business is in the healthcare industry?

 

A) CD or CHD

B) ePHI

C) PII

D) IP

 

HINT: Read this post: Top 3 Reasons Why Compliance Audits Are Required.

 

Answer: B) ePHI - Electronic Protected Health Information

 

Other Definitions: CD/CHD - Cardholder Data; ePHI - Electronic Protected Health Information; PII - Personally Identifiable Information; IP - Intellectual Property

 

Despite the higher black market value of healthcare data, 2016 saw a greater volume of compromised PCI data. This makes it all the more important to understand this framework.

 

8.) Which response most accurately describes PCI DSS compliance?

 

A) The organization can guarantee that credit card data will never be lost

B) The organization has followed the rules set forth in the Payment Card Industry Data Security Standards, and can offer proof in the form of documentation

C) The organization is not liable if credit card data is lost or stolen

D) The organization does not store PAN or CVV data under any circumstances

 

HINT: Check out this article: Best Practices for Compliance.

 

Answer: B) The organization has followed the rules set forth in the Payment Card Industry Data Security Standards, and can offer proof in the form of documentation.

 

According to the Verizon 2016 Data Breach Investigation Report, 89% of breaches had a financial or espionage motive. With a long history of unified compliance efforts, the banking industry certainly takes this seriously, and so should you.

 

9.) True or False: The Federal Financial Institutions Examination Council (FFIEC) is empowered to prescribe uniform principles, standards, and report forms to promote uniformity in the supervision of financial institutions.

 

HINT: See footnote #3 from the The Cost of Noncompliance.

 

Answer: True.

 

Though your aim as an IT pro may be to get compliance auditors off your back, the cybersecurity threat landscape is constantly changing.

 

10.)  True or False: If your organization passed its first compliance audit, that means its network is secure.

 

HINT: Watch our Becoming and Staying Compliant video (10:06- 11:09). Click here.

 

Answer: False. Continuous IT compliance is key to meeting and maintaining regulatory requirements long-term.

 

Any feedback on this quiz, or burning questions that came up as a result? Share a comment; we'd love to hear your thoughts.

Today's post steps out of my core skills a bit. I've always been an infrastructure guy. I build systems. I help my customers with storage, infrastructure, networking, and all sorts of solutions to solve problems in the data center, cloud, etc. I love this "puzzle" world. However, there's an entire category of IT that I've assisted for decades but never really been a part of. Developers, as far as I'm concerned, are magicians. How do they start with an idea and build something simply by writing "code" in some framework or language? I simply don't know. What I do know, though, is that the applications they're building are undergoing systemic change too. The difference between where we've been and where we're going is driven by the need for speed, responsiveness, and agility in how the code is modified.

 

In the traditional world of monolithic apps, these wizards needed to add features through adjunct applications, or learn the code on which some "off the shelf" software was written in order to make any changes. Moving forward, the microservices model takes over: code fragments, each purpose-built to either add functionality or streamline an existing function.

 

Offerings like Amazon's Lambda, Iron.io, and Microsoft Azure (with Google soon to follow) have upped the ante. I feel that the term "serverless" is an inaccuracy, as workloads will always need somewhere to run; there must be some form of compute element. But by abstracting that even further from the hardware (albeit virtual hardware), we rely less on where or what these workloads depend on. Particularly when you're a developer, worrying about infrastructure is something you simply do not want to do.

 

Let’s start with a conceptual definition:  According to Wikipedia, “Also known as Function as a Service (FaaS) is a cloud computing code execution model in which the cloud provider fully manages starting and stopping virtual machines as necessary to serve requests, and requests are billed by an abstract measure of the resources required to satisfy the request, rather than per virtual machines, per hour.” Really, this is not as “Jargon-y” as it sounds. You, the developer, are relying on an amount of processor/memory/storage, but you don’t have any reliance on persistence, or particular location. I feel it’s designed to be a temporary sandbox for your code.
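To make that definition a little more concrete, here is a minimal sketch of the kind of function a FaaS platform runs. It assumes an AWS Lambda-style handler signature (an event dict in, a response dict out); the "order" fields in the event are hypothetical, since every platform defines its own event shape.

```python
# Minimal sketch of a FaaS-style function, assuming an AWS Lambda-style
# handler signature (event dict in, response dict out). The "order" fields
# are hypothetical; a real platform defines its own event shape.
import json


def handler(event, context):
    # The platform starts a worker, calls this function for one request,
    # bills for the resources consumed, and may tear the worker down after.
    order = json.loads(event.get("body", "{}"))
    total = sum(item["qty"] * item["price"] for item in order.get("items", []))

    # No local persistence is assumed: anything worth keeping would be
    # written to an external store before returning.
    return {
        "statusCode": 200,
        "body": json.dumps({"orderTotal": round(total, 2)}),
    }


if __name__ == "__main__":
    # Local smoke test, outside any cloud platform.
    fake_event = {"body": json.dumps({"items": [{"qty": 2, "price": 4.25}]})}
    print(handler(fake_event, context=None))
```

The point of the sketch is the shape, not the math: the function holds no state of its own, which is exactly why the platform can spin it up and tear it down at will.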

 

In many ways, it's an alternative to the traditional IT infrastructure mode of requesting virtual machine(s) from your team, waiting for them, and then, if you've made some code errors, waiting again for those machines to be re-imaged. Instead, you spin up a temporary amount of horsepower, load it with a copy of your code, run it, then destroy it.

 

In the DevOps world, this kind of ephemeral environment is very beneficial. Functionally, building code in a DevOps world means that your application is made up of many small pieces of code rather than a single monolithic application. We have adopted the term "microservices" for these fragments of code. Agility and flexibility in these small pieces of code are critical; in fact, the whole concept of DevOps is about agility. In DevOps, as opposed to traditional code development, rollouts of code changes, updates, patching, etc. can take place with a level of immediacy, whereas full application rollouts require massive planning. Here, a rollout could potentially take place instantaneously, as these pieces tend to be tiny. Code updates can potentially be made, or rolled back, in minutes.

 

While it is completely certain that DevOps, and the agility it portends, will change the dynamic in information technology, particularly in companies that rely on home-grown code, what is truly uncertain is whether the concept of serverless computing will ever grow beyond the test/dev world and into production environments. I find it difficult, maybe due to my own lack of coding skills, to envision deploying workloads in such a piecemeal approach; putting the pieces together inside a single workload feels far more approachable to me.

A Never Ending IT Journey around Optimizing, Automating and Reporting on Your Virtual Data Center

1609_VM_Ebook_photo_1.jpg

Reporting

IT reporting at its best is pure art backed by pure science and logic. It is storytelling with charts, figures, and infographics. The intended audience should be able to grasp key information quickly. In other words, keep it stupid simple. Those of you following this series and my 2016 IT resolutions know that I’ve been beating the “keep it stupid simple” theme pretty hard. This is because endless decision-making across complex systems can lead to second-guessing, and we don’t want that. Successful reporting takes the guesswork out of the equation by framing the problem and solution in a simple, easily consumable way.

 

The most important aspect of reporting is knowing your target audience and creating the report just for them. Next, define the decision that needs to be made. Make the report pivot on that focal point, because a decision will be made based on your report. Finally, construct the reporting process in a way that will be consistent and repeatable.

  • excerpted from Skillz To Master Your Virtual Universe SOAR Framework

 

Reporting in the virtual data center details the journey of the virtualization professional. The story starts with the details of the virtual data center and its key performance indicators. It then evolves into a journey of how to get what is needed to expand the delivery capabilities of the virtual data center. With agility, availability, and scalability at the heart of the virtual data center show, reporting is the justification for optimization and automation success.

 

Audience and context matters

Reporting is an IT skill that provides the necessary context for decision-makers to make their singular decision. The key aspects of reporting are the audience and the context. You need to know who the audience is, and that will guide you on the context, i.e., the data, the data analysis, and the data visualization required in the report. To report adeptly, an IT professional needs to answer the following questions: for whom is the report intended, and what needs to be included for a decision?

 

Reporting molds data and events into a summary highlighting key truths so decision-makers can make quick, sound decisions. It is neither glamorous nor adrenaline-pumping, but it shows IT mastery in its final, evolved form: a means to an end.

 

This post is a shortened version of the eventual eBook chapter. Stay tuned for the expanded version in the eBook.

Once upon a time, a small but growing geeky community found a way to instantly communicate with each other using computers. Internet Relay Chat (IRC), and later iterations like ICQ, were the baby steps toward today's social media platforms, connecting like-minded people into a stream of consciousness. It didn't take long before some smart developer types coded response bots into these chats. IRC chat bots provided an interface for finding answers without a human needing to be at the other end.

 

All types of bots

Today, some social media platforms have embraced bots more than others. Slack’s library of bots is extensive, ranging from witty responses to useful business enablers that highlight information or action requests. Twitter’s bots tend to fall more into the humor category or the downright annoying auto follow/unfollow bots. At the useful end of the scale is Dear Assistant, which is a search-style bot for answering your questions. My personal favorite though is the bot that tweets in real-time from different passengers & crew on the anniversary of the Titanic voyage each year.

 

You can see the difference between bots for automating the dissemination of information, bots that provide a reactive response to input, and bots that connect to and use another service before delivering their response. Our acceptance of this method of interaction and communication is growing, though I wouldn’t say it’s totally commonplace in the consumer market just yet. People still tend to prefer to chat (even online) with other people instead of bots, when looking for an answer to a problem that they can’t already find with a web search.
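As a toy illustration of the "reactive response to input" category, here is a minimal keyword-matching bot sketch. The triggers and canned replies are invented for the example; a real bot would sit behind a chat platform's API rather than a console loop.

```python
# Toy sketch of a reactive chat bot: match a keyword in the incoming
# message and return a canned reply. Triggers and replies are invented
# for illustration; a real bot would plug into a chat platform's API.
RESPONSES = {
    "status": "All monitored services are currently green.",
    "oncall": "This week's on-call engineer is listed in the team wiki.",
    "help": "Try: status, oncall, help.",
}


def reply(message: str) -> str:
    text = message.lower()
    for trigger, answer in RESPONSES.items():
        if trigger in text:
            return answer
    return "Sorry, I don't know that one yet. Type 'help' for options."


if __name__ == "__main__":
    for line in ["What's the status?", "who is oncall", "tell me a joke"]:
        print(f"> {line}\n{reply(line)}")
```

The more useful bots described above simply replace the canned dictionary with a call out to another service before answering.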

 

Along comes Voice

The next step in this evolution was voice-controlled bots, as speech recognition technology improved and became commonplace. Siri, Cortana, Alexa, Google Home... all provide that 'speak to me' style of bot interaction in an affordable wrapper. It doesn't feel like justice to call these 'assistants' bots, though, even with an underlying 'accept command and respond' service. Today's voice-controlled assistants must meld the complexity of speech recognition with phrase analysis to deliver a quality answer to a question that could be phrased many different ways. Adoption of voice control varies, with some people totally hooked on speaking commands and requests into their devices, while others reserve it for moments of fun and beat-boxing.

 

If bots have been around for so long, why are they only becoming more mainstream now?

 

I think bots are an example of a technology that was before its time. It took more widespread adoption and acceptance of social media before we reached the critical mass where enough people were on those platforms to make bots worth looking at. Social media also provided an easy input channel to businesses and brands, who now have a problem that bots can solve: the automation of inquiries without the cost of human head count 24x7. We've also created a problem with the large number of cost-effective cloud services. It's not uncommon for people to use different internet-based services for different tasks, leading to a need for connectivity. Systems like If This Then That and Microsoft Flow are helping to solve that problem. Bots also help connect our services, bringing notifications or input points into a more centralized location so we don't have to bounce around as much with information siloed in individual services. That's important in a time-poor, information-overwhelmed society.

 

The rise of the bots reminds me of the acceptance of instant messaging and presence awareness. Back in the old days, Lotus Sametime had the capability to show if someone was online from within an email and provide instant individual and group messaging. While it was an Enterprise tool (no Sametime as a Service, free or subscription based), we still had a hard time in the late 90s convincing people of the business benefit. Surely they’ll just all gossip with each other and not get any work done? You have less of a challenge these days convincing a business about Skype or Skype for Business when portions of the workforce (especially the younger employees) have Facebook, Slack and Twitter as part of their lives.  In a world where some prefer text-based chat to phone calls, instant messaging in the workplace is not that big of an adoption leap. I think bots now have a similar, easier path to adoption, though they still have a way to go.

 

Why is this important?

Over the next few weeks I'm going to delve more into this emerging technology, along with machine learning and artificial intelligence. I'm interested to hear whether you think it's all hype, or whether we need to embrace it as the next big, life-changing thing. As much of a geek as I am, I've seen concepts fail when they are a technology looking for a problem to solve, rather than the other way around. If bots, AI, and machine learning are the future, what will that look like for us as consumers and as IT professionals?

IMG_2055.JPG

 

Automation is the future. Automation is coming. It will eliminate all of our jobs and make us obsolete. Or at least that is the story that is often told. Isn't it true, though?

I mean, who remembers those vending machines of the future that were set to eliminate the need for cooks, chefs, and cafeterias? That sounds remarkably like the same tools we're using on a regular basis, built, designed, and streamlined to make us all unnecessary to the business! And then profit, right?

 

Well, if automation isn't intended to eliminate us, what IS it for? Some might say that automation makes the things we're already doing today easier and makes us better at our jobs. That can be true, to a point. Some might also say that automation takes things we cannot do today and makes them possible, so we can be better at our jobs. That can also be true, to a point.

 

How many of you recall, over the course of your networking operations and management lives, long before Wireshark and Ethereal, having to bring in or hire a company to help troubleshoot a problem with a "sniffer laptop"? It wasn't anything special by today's standards, and it's something we all likely take for granted now, yet it was a specialized piece of equipment with specialized software that enabled us to gain insight into our environment and dig into the weeds to see what was going on.

Screen Shot 2016-10-26 at 10.12.59 PM.png

 

These days, though, with network monitoring tools, SNMP, NetFlow, sFlow, syslog servers, and real-time telemetry from our network devices, it is not only attainable, it's downright expected that we should have all of this information and visibility.

 

With the exception of the specialized 'sniffer' engineer, I don't see that automation as having eliminated people. It only made us all more valuable, and yet the expectation of what we're able to derive from the network has only grown. This kind of data has made us more informed, but it hasn't exactly made us smarter. The ability to read the Rosetta stone of networking information and interpret it is what has separated the engineers from the administrators in some organizations. Often, the use of tools has been the key to taking that data and making it not only readable but also actionable.

 

Automation can rear its beautiful or ugly head in many different incarnations in the business: from making the deployment of workstations or servers easier than it was in the past with software suites, tools, or scripting, to (taking a dated analogy) eliminating the need for manual switchboard operators at telcos by replacing them with switching stations that automatically transfer calls based on the characteristics of dialing. There is a contrast between taking something we were already doing and making it better, and taking something we were already doing with people and eliminating those people. And then there are capabilities that are entirely new. Only in this latest generation, thanks to technology and automation, can credit companies generate one-time credit card numbers that are tied back to a very specific amount of money to withdraw from your credit card account. That capability was not only unheard of in the past, it would have been fundamentally impossible to implement, let alone police, without our current generation of data, analytics, and automation abilities.

 

 

As this era of explosive knowledge growth, big data analytics, and automation continues, what have you been seeing as the big differentiators? What kind of automation is making your job easier or more possible, which aspects are creating capabilities that fundamentally didn't exist before, and which parts are partially or wholly eliminating the way we do things today?

To get started with securing your network, you don't need to begin with a multi-million-dollar project involving multiple penetration tests, massive network audits, widespread operating system upgrades, and the installation of eye-wateringly expensive security appliances. Those are all great things to do if you have the staff and budget, but just starting with some entry-level basics will provide a huge step forward in securing your network from today's more common vulnerabilities. These ten practices are relatively easy and quick ways to create the foundation of a robust security program.

 

1. Patching

 

Keeping operating system patches up to date may seem like a no-brainer, but it still seems to fall by the wayside even in large organizations. I use the term “patching” very loosely here because I want to highlight the importance of updating all operating systems, not just Windows.

 

It’s important to set a regular Windows patch schedule and automate it using whatever tools you have available. Whether this is weekly or monthly, the key is that it’s regular and systematic.

 

Also remember all the other operating systems running on the network. This means keeping track of and updating the software running on network devices such as routers, switches, and firewalls, and also Linux-based operating systems commonly used for many servers and specialized end-user use cases. 
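As a hedged sketch of what "regular and systematic" can look like in practice, here is a small report that flags devices whose last recorded patch date is older than the schedule allows. The inventory records and the 30-day window are assumptions for illustration; in a real shop this data would come from your patch or configuration management system.

```python
# Sketch: flag devices that have fallen behind the patch schedule.
# The inventory list and the 30-day window are assumptions for illustration;
# in practice this data would come from your patch/config management tooling.
from datetime import date, timedelta

MAX_PATCH_AGE = timedelta(days=30)  # assumed monthly patch cycle

inventory = [
    {"host": "core-sw-01", "os": "IOS",   "last_patched": date(2016, 9, 2)},
    {"host": "fw-edge-01", "os": "ASA",   "last_patched": date(2016, 10, 20)},
    {"host": "app-lnx-07", "os": "Linux", "last_patched": date(2016, 6, 14)},
]


def overdue(devices, today=None):
    """Return the devices whose last patch date exceeds the allowed age."""
    today = today or date.today()
    return [d for d in devices if today - d["last_patched"] > MAX_PATCH_AGE]


if __name__ == "__main__":
    as_of = date(2016, 11, 1)
    for dev in overdue(inventory, today=as_of):
        age = (as_of - dev["last_patched"]).days
        print(f"{dev['host']} ({dev['os']}): last patched {age} days ago")
```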

 

 

2. Endpoint Virus Protection

 

Not long ago, endpoint virus protection was a bear to run because of how resource intensive it was on the local computer. This is not at all the case anymore, and with how frequently malware sneaks into networks via email and random web-browsing, endpoint protection is an absolutely necessary piece of any meaningful security program.

 

 

3. Policy of Least Privilege

 

Keep in mind that attack vectors aren’t all external to your network. It’s important to keep things secure internally as well. Assigning end-users only the privileges they need to perform their job function is a simple way to provide another layer of protection against malicious or even accidental deletion of files, copying or sending unauthorized data, or accessing prohibited resources.

 

 

4. Centralized Authentication

 

Using individual or shared passwords to access company resources is a recipe for a security breach. For example, rather than use a shared pre-shared-key for company wireless, use 802.1x authentication and a centralized database of users such as Windows Active Directory in order to lock down connectivity and restrict what resources users can access. This can be applied to almost any resource including computers, file shares, and even building access.

 

 

5. Monitoring and Logging

 

Monitoring a network and keeping extensive logs can be very expensive simply because of the cost associated with the hardware and licensing needed to audit and store large amounts of data. However, this may be one area in which it would be a good idea to explore some software options. Most network devices have very few built-in tools for monitoring and logging, so even a basic software solution is still a huge step forward. This is very important for creating network baselines in order to determine anomalous behavior as well as the traffic trends needed to right-size network designs. Additionally, having even very basic logs is priceless when investigating a security breach or performing service-desk root cause analysis.
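As a minimal illustration of using basic logs to establish a baseline and spot anomalies, the sketch below counts events per hour and flags hours that deviate sharply from the average. The sample counts and the two-standard-deviation threshold are assumptions for illustration, not a recommended detection rule.

```python
# Sketch: build a crude baseline from hourly event counts and flag hours
# that deviate sharply from the mean. The sample counts and the
# two-standard-deviation threshold are assumptions for illustration.
from statistics import mean, pstdev

# Hypothetical events-per-hour pulled from a syslog collector.
events_per_hour = [120, 131, 118, 125, 122, 610, 119, 127]

baseline = mean(events_per_hour)
spread = pstdev(events_per_hour)

for hour, count in enumerate(events_per_hour):
    if spread and abs(count - baseline) > 2 * spread:
        print(f"Hour {hour}: {count} events deviates from baseline {baseline:.0f}")
```

Real tools do this with far better statistics, but even this crude pass shows how a baseline turns a pile of logs into something actionable.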

 

 

6. End-user training

 

The only way to completely secure a network is to turn off all the computers and send them to the bottom of the ocean. Until management approves such a policy, end-users will be clicking on links they shouldn’t be clicking on and grabbing files from the file share just before their friendly exit interview. End-user training is a practice in changing culture and security awareness. This is a difficult task for sure, but it’s an inexpensive and non-technical way to strengthen the security posture of an organization. End-user training should include instructions on what red flags to look for in suspicious email and how to report suspicious activity. It should also include training to prevent password sharing and how to use email properly.

 

 

7. Perimeter Security

 

The perimeter of the network is where the local area network meets the public internet, but today that line is very blurred. A shift toward a remote workforce, the use of cloud services, and the movement away from private circuits mean that the public internet is almost an extension of the LAN. Traditionally, perimeter firewalls were used to lock down the network edge and stop any malicious attack from the outside. Today, so much necessary business traffic ingresses and egresses the perimeter firewall that it's important to keep firewall rules up to date and maintain a very keen awareness of what services run on the network. For example, a very simple modification for egress filtering is to restrict outbound traffic on port 25 (Simple Mail Transfer Protocol) to only the email server. This simple firewall change prevents an infected computer from generating outbound mail traffic that could mark the organization as a spam originator.
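One hedged way to verify that kind of egress rule is to attempt a TCP connection to port 25 on an outside address from a host that should be blocked, and report whether the firewall stops it. The test targets below are documentation-range placeholders, not real relays.

```python
# Sketch: verify outbound port 25 is blocked from this host (which should
# not be a mail server). The test targets are documentation-range
# placeholders; run this from a machine the egress rule is supposed to cover.
import socket

TEST_TARGETS = ["192.0.2.25", "198.51.100.25"]  # placeholder addresses
SMTP_PORT = 25


def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for target in TEST_TARGETS:
        reachable = can_reach(target, SMTP_PORT)
        verdict = "NOT blocked (check egress rule)" if reachable else "blocked as expected"
        print(f"{target}:{SMTP_PORT} -> {verdict}")
```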

 

 

8. Enterprise IoT

 

The Internet of Things may certainly be a buzzword in some peoples' minds, but many companies have been dealing with a multitude of small, insecure, IP-enabled devices for years. Manufacturing companies often use hand-held barcode scanners, medical facilities use IP-based tracking devices for medical equipment, and large office campuses use IP-based card access readers for doors. These devices aren't always very secure, sometimes utilizing port 80 (unencrypted HTTP) for data transmission, which can be a big hole in an organization's network security posture. Some organizations have the money and staff to implement custom management systems for these devices, but an entry-level approach to get started could be to simply place all like devices in a VLAN that has very restricted access. Applying the policy of least privilege to a network's IoT devices is a great first step toward securing the current influx of IP-enabled everything.

 

 

9. Personal Devices

 

End-users’ personal mobile devices, including smartphones and tablets, often outnumber corporate devices on many enterprise networks. It’s important to have a strategy to give folks a pleasant experience using their devices while keeping in mind that these are normally unmanaged and unknown network entities. To start, simply require by policy that all personal smartphones must use the guest wireless. This may ruffle some executive feathers, but really there’s almost no reason for a tiny smartphone to access company resources while on the LAN. Of course there are exceptions, but starting with this type of policy is at least a good company conversation starter to move toward a decent end-user experience without compromising network security.

 

 

 

10. Physical security

 

It may go without saying that a company's servers, network devices, and other sensitive infrastructure equipment should be behind locked doors, but often this is not the case. Especially in smaller organizations where there may be a culture of trust, entire server rooms are left unlocked and accessible to anyone walking by. Physical security can take the form of biometric scanners for entering secure data centers with cameras peering down from overhead, but a simple first step is to lock all the network closets and server room doors. If keys are unaccounted for, locks should be changed. Additionally, disabling network ports not assigned to a workstation, printer, or other verified network device is a good way to prevent guests from plugging their non-corporate devices into the corporate network.

 

You don’t need to mortgage the farm to start making great strides in your organization’s security posture. These relatively simple and entry-level tasks will prevent most of the attack vectors we see today. Start with the basics to lay the foundation for a strong network security posture.


What's Your IoT Reality?

Posted by DanielleH Administrator Oct 26, 2016

Pop quiz: When was the first instance of an Internet of Things thing?

 

If you guessed 1982, you’re right! In 1982, a Coca-Cola machine at Carnegie Mellon University was modified to connect to the internet and report its inventory and whether newly loaded drinks were cold. Of course, this is up for debate as some might even claim the first internet-connected computer was an IoT thing.

 

These days, IoT things are decidedly more numerous, and overall, IoT is much more complex. In fact, according to Gartner, in just the last two years, the number of IoT devices has surged to 6.4 billion, and by 2020, device population will reach 20.8 billion. With these eye-popping figures, we’re very curious to know how your job has been affected by IoT.

 

Does your organization have an IoT strategy?

 

How many IoT devices do you manage?

 

How have these devices affected your network and the ability to monitor it and/or capacity plan?

 

What are the security challenges you face as a result of these devices?

 

Tell us about your “IoT reality” by answering these questions (and maybe provide an anecdote or two) by November 9 and we’ll give you 250 THWACK points—and maybe a Coke circa 1982 (just kidding)—in return. Please be sure to also let us know a little about your company: industry, size, location (country).

I'm in Seattle this week at the PASS Summit. If you don't know PASS, check out sqlpass.org for more information. This will be my 13th consecutive Summit and I'm as excited as if it was my first. I am looking forward to seeing my #sqlfamily again.

 

Anyway, here's a bunch of stuff I found on the Intertubz in the past week that you might find interesting, enjoy!

 

DDoS attack against DNS provider knocks major sites offline

Yet another reminder that the internet is fragile, and you can't rely on any service provider to be up 100% of the time.

 

10 Modern Software Over-Engineering Mistakes

A bit long but worth the read. I am certain you will identify (painfully at times) with at least 8 or more of the items on this list.

 

What will AI make possible that's impossible today?

Also a bit long, but for someone that enjoys history lessons I found this worth sharing.

 

Go serverless for the enterprise with Microsoft Azure Functions

One of the sessions from Microsoft Ignite I felt worth sharing here. I am intrigued by the 'serverless' future that appears to be on the horizon. I am also intrigued as to how this future will further enable DDoS attacks because basic security seems to be hard for most.

 

Bill Belichick is done using the NFL’s Microsoft Surface tablet he hates so much

On the bright side, the announcers keep calling it an iPad, so there's that.

 

Here's how I handle online abuse

Wonderful post from Troy Hunt on how he handles online abuse. Those of us who put ourselves out there often get negative feedback from the cesspool of misery known as the internet, and Troy shares his techniques for dealing with it.

 

Michael Dell Outlines Framework for IT Dominance

"Work is not a location. You don't go to work, you do work." As good as that line is, they forgot the part about work also being a never ending day.

 

Microsoft allows Brazil to inspect its source code for ‘back doors’

I've never heard of this program before, but my first thought is how would anyone know if Microsoft was showing them everything?

 

I managed to visit some landmarks while in Barcelona last week; here's Legodaughter in front of the Sagrada Familia:

lego - 1.jpg

Email is at the center of life for almost every business in this world. When email is down, businesses cannot communicate. There is a loss of productivity, which can lead to lost dollars, and in the end that is not good.

Daily monitoring of Exchange involves many aspects of the environment and the surrounding infrastructure. Simply turning on monitoring will not get you very far. The first question you should ask yourself is, "What do I need to monitor?" Not knowing what to look for can inundate you with alerts, which is not going to be helpful.

One of the first places to look when troubleshooting mail slowness or other email issues is the network. Therefore, it is a good idea to monitor some basic network counters on the Exchange servers. These counters will help guide you toward the root cause of the issue.

 

Network Counters

The following table displays acceptable thresholds and information about common network counters.

 

Counter: Network Interface(*)\Packets Outbound Errors
Description: Indicates the number of outbound packets that couldn't be transmitted because of errors.
Threshold: Should be 0 at all times.

Counter: TCPv6\Connection Failures
Description: Shows the number of TCP connections for which the current state is either ESTABLISHED or CLOSE-WAIT. The number of TCP connections that can be established is constrained by the size of the nonpaged pool. When the nonpaged pool is depleted, no new connections can be established.
Threshold: Not applicable

Counter: TCPv4\Connections Reset
Description: Shows the number of times TCP connections have made a direct transition to the CLOSED state from either the ESTABLISHED state or the CLOSE-WAIT state.
Threshold: An increasing number of resets or a consistently increasing rate of resets can indicate a bandwidth shortage.

Counter: TCPv6\Connections Reset
Description: Shows the number of times TCP connections have made a direct transition to the CLOSED state from either the ESTABLISHED state or the CLOSE-WAIT state.
Threshold: An increasing number of resets or a consistently increasing rate of resets can indicate a bandwidth shortage.
 

 


Monitoring Beyond the Exchange Environment


When you're monitoring Exchange, you're not only monitoring for performance; you also want to monitor outside factors such as the network, Active Directory, and any external connections such as mobile device management. All these external factors affect the health of your Exchange environment.

In order to run Exchange, you need a network; yes, routers and switches can impact Exchange. As the Exchange admin, you don't need to be aware of every single network event, but a simple alert for a network outage or blip can be helpful. Sometimes all it takes is a slight blip in the network to affect your Exchange DAG by causing databases to fail over.

If you are not responsible for the network, then I would suggest you coordinate with your network team on what notifications you should receive about network outages. Some key items to be informed or notified of are:

  • Planned outages between datacenters
  • Planned outages for network equipment
  • Network equipment upgrades and/or changes that would affect the subnet your Exchange servers reside on
  • Unplanned outages of network equipment and between datacenters
  • If your servers are virtualized, you should be informed of any host changes and/or virtual switch changes
  • Planned or unplanned DNS server changes because DNS issues can be a real nightmare

Preventing Bigger Headaches

Exchange monitoring is a headache and can be time-consuming, but if you know what you are looking for and have the right tools in hand, it is not so bad. If the Exchange DAG is properly designed, a network blip or outage should not take down email for your company; that is the whole point of a DAG (a high-availability design). What you may get is help desk calls when users see that their Outlook has disconnected briefly. Being informed of potential network outages can help you prepare in advance if you need to manually switch active copies of databases or when you need to do mailbox migrations. A network that is congested or having outages can cause mailbox migrations to fail, cause Outlook clients to disconnect, and even impact the speed of email delivery. Knowing ahead of time allows you to be prepared and have fewer headaches.

 


Forget about writing a letter to your congressman – today citizens use the web, email and social media to make their voices heard on the state, local and federal levels.

 

Much of this participation is due to the ubiquity of mobile devices. People can do just about everything with a smartphone or tablet and they expect their interactions with the government to be just as easy.

 

Unfortunately, according to a January 2015 report by the American Customer Satisfaction Index, citizen satisfaction with federal government services continued to decline in 2014. This, despite Cross-Agency Priority Goals that require federal agencies to “utilize technology to improve the customer experience.”

 

IT pros need to design services that allow users to easily access information and interact with their governments using any type of device. Then, they must monitor these services to ensure they continue to deliver optimal experiences.

 

Those who wish to avoid the ire of the citizenry would do well to add automated end-user monitoring to their IT toolkit. End-user monitoring allows agency IT managers to continuously observe the user experience without having to manually check whether a website or portal is functioning properly. It can help ensure that applications and sites remain problem-free and enhance a government's relationship with its citizens.

 

There are three types of end-user monitoring solutions IT professionals can use to ensure their services are running at peak performance.

 

First, there is web performance monitoring, which can proactively identify slow or non-performing websites that could hamper the user experience. Automated web performance monitoring tools can also report load-times of page elements so that administrators can adjust and fix pages accordingly.

 

Synthetic end-user monitoring (SEUM) allows IT administrators to run simulated tests on different scenarios to anticipate the outcome of certain events. For example, in the days leading up to an election or critical vote on the Hill, agency IT professionals may wish to test certain applications to ensure they can handle spikes in traffic. Depending on the results, managers can make adjustments to handle the influx.

 

Likewise, SEUM allows for testing of beta applications or sites, so managers can gauge the positive or negative aspects of the user experience before the services go live.

 

Finally, real-time end-user monitoring effectively complements its synthetic partner. It is a passive monitoring process that gathers actual performance data as end users are visiting and interacting with the web application in real time, and it will alert administrators to any sort of anomaly.
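As a simple illustration of the synthetic side of this, here is a sketch that periodically times a page request and flags slow or failed responses. The URL, interval, and threshold are placeholders; real SEUM tools script full multi-step transactions rather than single page fetches.

```python
# Sketch of a tiny synthetic check: time an HTTP request to a page and flag
# slow or failed responses. URL, threshold, and interval are placeholders.
import time
import urllib.request

URL = "https://www.example.gov/"   # placeholder endpoint
SLOW_THRESHOLD = 2.0               # seconds before a response counts as slow
CHECKS = 3

for i in range(CHECKS):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            elapsed = time.monotonic() - start
            status = resp.status
    except OSError as err:
        print(f"{URL} check FAILED: {err}")
    else:
        flag = "SLOW" if elapsed > SLOW_THRESHOLD else "OK"
        print(f"{URL} -> HTTP {status} in {elapsed:.2f}s [{flag}]")
    if i < CHECKS - 1:
        time.sleep(30)  # placeholder interval between samples
```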

 

Using these monitoring techniques, IT teams can address user experience issues from certain locations – helping to ascertain why a response rate from a user in Washington, D.C., might be dramatically different from one in Austin, Texas.

 

Today, governments are trying to become more agile and responsive and are committed to innovation. They’re also looking for ways to better service their customers. The potent combination of synthetic, real-time and web performance monitoring can help them achieve all of these goals by greatly enhancing end-user satisfaction and overall citizen engagement.

 

Find the full article on Government Computer News.

Without a doubt, the biggest trend in IT storage over the past year, and moving forward, is the concept of Software Defined Storage (SDS). It’s more than just a buzzword in the industry, but, as I’ve posted before, a true definition has yet to be achieved. I’ve written about this before, and here is the short version of what I said.

 

Ultimately, I talked about different brands with different approaches and definitions, so I’m not going to rehash the details here. At a high level, the value as I see it has to do with decoupling the hardware layer from the management plane. Leveraging the horsepower of commodity hardware in reference builds, plus a management layer optimized for that hardware build, lets the users/IT organization drive costs down and, potentially, customize the hardware choices for the use case. Typically, your choices revolve around read/write IOPS, disk redundancy, tiering, compression, deduplication, the number of paths to disk, and failover, and of course, with x86 architecture, the amount of RAM and the speed of the processors in the servers. Compared to traditional monolithic storage platforms, this makes for a compelling case.

 

However, there are other approaches. I’ve had conversations with customers who only want to buy a “premixed/premeasured” solution. And while that doesn’t lock out SDS models such as the one above, it does change the game a bit. Toward that end, many storage companies will align with a specific server and disk model. They’ll build architectures very tightly bound to a hardware stack, even though they’re relying on commodity hardware, and allow customers to purchase the storage in much the same way as more traditional models. They often take it a step further and put a custom bezel on the front of the device, so it may be Dell behind, but it’s “Someone’s Storage Company” in the front. After all, the magic, whatever makes the solution unique, takes place at the software layer… so why not?

 

Another category that I feel is truly trending in storage is a relatively recent category in backup, dubbed CDM, or Copy Data Management. Typically, these are smaller bricks of storage that act as gateway-type devices, holding onto some backup data but also pointing to the real target, as defined by the lifecycle policy on the data. There are a number of players here; I am thinking specifically of Rubrik, Cohesity, Actifio, and others. Because these devices are built on storage bricks but made functional by superior software, I would also consider them valid examples of Software Defined Storage.

 

Backup and disaster recovery are key pain points in the management of an infrastructure. Traditional methods consisted of scripted software moving your backup data onto a tape mechanism (perhaps robotic), which often required manual filing and then rotation of tapes off-site to places like Iron Mountain. Restores have been tricky: time is spent waiting on the restore, and corruption of files on those tapes has been a reasonably consistent risk. Tools like these, along with other styles of backup including cloud service providers and even public cloud environments, have made things far more viable. CDM solutions take much of the legwork out of the process and can enable near-zero recovery point and recovery time objectives, regardless of where the current copy of a file is located, whether that is the CDM device, local storage, or some cloud vendor’s storage. It shouldn’t matter, as long as the metadata points to that file.

 

I have been very interested in changes in this space for quite some time, as these are key process changes pushing upward into the data center. I’d be curious to hear your responses and ideas as well. 


PCI DSS 3.2 is coming!

Posted by joshberman Employee Oct 24, 2016

PCI DSS 3.1 expires October 31st of this year. But don’t panic. If you don’t have a migration plan for 3.2 yet, you have until February 1, 2018, before the new requirements become mandatory. For most merchants, the changes are not onerous. If you are a service provider, however, there are more substantial changes, some of which are already in effect. In this post we focus on the requirements for merchants and present a quick overview of the changes so that you can identify any gaps you may still need to remediate.

 

The main changes in PCI DSS 3.2 are around authentication, encryption, and other minor clarifications. We will primarily discuss authentication and encryption in this article.

 

Authentication

The PCI Council’s review of data breaches over the years has confirmed that current authentication methods are not sufficient to mitigate the risk of cardholder data breaches. Even though cardholder data is maintained on specific network segments, it is nearly impossible to prevent a data breach if the authentication mechanisms are vulnerable as well. Specifically, reliance on passwords, challenge questions, and the like alone has proven to be a weakness in the overall security of cardholder networks.

 

The new 3.2 requirements now state that multi-factor authentication (MFA) is required for all individual non-console administrative access and all remote access to the cardholder data environment.[1] The new standard is clear. Multi-factor[2] means at least two of the following:

 

  • Something you know, such as a password or passphrase
  • Something you have, such as a token device or smart card
  • Something you are, such as a biometric

 

Multi-factor does not mean a password and a set of challenge questions, or two passwords.
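
To illustrate the “something you have” factor, here is a minimal sketch of TOTP code generation (RFC 6238), the mechanism behind most authenticator apps and many hardware tokens, using only the Python standard library. The Base32 secret below is a made-up example, and this is not a substitute for a vetted MFA product.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Return the current time-based one-time password for a Base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval               # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical shared secret
```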

 

Implementing such multi-factor authentication takes time and planning, as there are almost 200 different types of multi-factor solutions on the market.

Multi-factor authentication is also complex because:

  1. It likely needs to integrate with your single sign-on solution
  2. Not all IT systems can support the same types of MFA, especially cloud solutions
  3. MFA resets are more complex (especially if a dongle is lost)
  4. Most MFA solutions require rollout and management consoles on top of your built-in authentication

 

Encryption

In recent years, SSL, the workhorse for securing e-commerce and credit card transactions, has been through the wringer. From Heartbleed and POODLE to the man-in-the-middle AES-CBC padding attack reported this May, SSL, OpenSSL, and all the derivative implementations that have branched from OpenSSL have experienced one high-severity vulnerability after another. [SIDEBAR: A list of all vulnerabilities in OpenSSL can be found here: List of OpenSSL Vulnerabilities; other SSL vulnerabilities can be found on vendors’ websites or at the National Vulnerability Database.] Of particular concern are SSL vulnerabilities in Point of Sale solutions and other embedded devices, as those systems may be harder to upgrade and patch than general-purpose computers. In response, the PCI council has issued guidance on the use of SSL.[3]

 

The simplest approach to achieving PCI DSS 3.2 compliance is not to use SSL, or early versions of TLS, period. [SIDEBAR: TLS, Transport Layer Security, is the name of the IETF[4] RFC that was the follow-on to SSL.] In fact, any version of TLS prior to 1.2 and all versions of SSL are considered insufficiently secure to protect cardholder data. As a merchant, you may still use these older versions if you have a plan in place to migrate by June 2018 and a risk mitigation plan to compensate for the risks of using them. Risk mitigation plans might include, for example, additional breach detection controls, documented patches for all known vulnerabilities, carefully selected cipher suites that permit no weak ciphers, or all of the above. If you have a Point of Sale or Point of Interaction system that does not have any known vulnerabilities, you may run those systems on older versions of SSL/TLS. The PCI council reserves the right to change this guidance if new vulnerabilities associated with a particular POS or POI become known that jeopardize the cardholder environment.
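
One quick way to verify where a given endpoint stands is to attempt a handshake that refuses anything older than TLS 1.2. The Python sketch below does exactly that using the standard library’s ssl module; the hostname is a placeholder.

```python
import socket
import ssl


def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Connect to host:port and return the negotiated TLS version string."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSL and early TLS
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.2" or "TLSv1.3"


print(negotiated_tls_version("example.com"))  # placeholder hostname
```

If the server only offers SSL or early TLS, the handshake fails with an ssl.SSLError, which is the signal that the endpoint still needs migration work.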

 

If you are concerned about the risk to your e-commerce or mobile environment from upgrading your SSL to TLS 1.2 or higher, you can ask your online marketing department what the oldest versions of iOS, Android, Windows® and Mac® are that connect to your systems. Android has supported TLS 1.2 since version 4.1, although it is not enabled by default; as of version 4.4, KitKat, released Oct. 2013, TLS 1.2 has been enabled by default. iOS has supported TLS 1.2 since iOS 5, released Oct. 2011. Windows 8 is the first Microsoft OS to support TLS 1.2 by default; prior to that, you needed to manually configure TLS 1.2 support. A complete TLS table for Windows is available here: TLS version support. For Mac OS®, you need to reach Mavericks (Oct. 2013) to find a Mac with TLS 1.2 enabled by default.

 

If all this versioning seems daunting, not to worry. Most modern browsers, which auto-upgrade, have supported TLS 1.2 for a long time.[5] The net-net is that most organizations have nothing to fear from upgrading their secure communications layer to TLS 1.2.

 

Minor Clarifications

There are some additional nuances regarding the correct cipher suites to use to meet PCI DSS 3.2 requirements, which we will cover in a future post on using managed file transfer in cardholder environments.

 

If you are a PCI pro, you now have a good overview of the major changes coming in PCI DSS 3.2. Note that we didn’t address the additional changes a service provider needs to comply with, nor did we walk through all 12 requirements of PCI DSS and how they apply to IT organizations. For some of that, you should check out the Simplifying PCI Compliance for IT Professionals White Paper.

 

Questions about PCI DSS?

If you have questions regarding the latest version of PCI DSS, or what’s generally required from an IT perspective to maintain compliance, we urge you to join in the conversation on THWACK!

 


[1] Requirement 8.3

[2] Requirement 8.2

[3] PCI DSS 3.2 Appendix 2

[4] Internet Engineering Task Force RFC 2246, et seq. https://www.ietf.org/rfc/rfc2246.txt

[5] https://en.wikipedia.org/wiki/Template:TLS/SSL_support_history_of_web_browsers
