
Geek Speak


Hack your life with bots

Posted by scuff Nov 9, 2016

Having recently dropped Cortana for Siri (I'm still mourning that, but I have apps!), I must admit I haven't integrated either into my life as a habit. I'm a writer at heart, so it still feels more natural for me to type and tap rather than talk to get things done. The next generation has never known a phone without a voice-activated assistant, though. To them, it's not a bot, it's a productivity tool.


The voiceless bots have their place, though. Chatbots on the web help us find information while feeling like there's a human replying. Other services quietly process data and plan meals so that we don't have to make all the decisions.


With Amazon's Alexa and Google Home, our voice-activated servants are multiplying. Except in Australia, that is, where you can't buy either. You can get creative acquiring one, though, and set Alexa's location to Guam, which shares the same time zone.

On their own, these bots are glorified searchers and data entry units – “What time is the game on TV?” “Set a reminder for tomorrow to send James a birthday card.”


Hook them up to connected things in your home and you are entering home automation territory. Combine a Raspberry Pi with Siri for an Apple-esque voice controlled home.

Use Alexa to dim your Philips Hue lights or turn on your Belkin WeMo switch. She will also place your Amazon orders (again, outside of Australia) unless you disable that for the safety of your credit card balance. You can also add Alexa to your If This Then That recipes for access to other online services.


As consumer Cloud services have grown, people are more driven by functionality than by brand loyalty. It doesn't cost much (sometimes it's even free) to have multiple different service subscriptions, so you'll find people using Google Calendar and Dropbox instead of Google Drive. Connectivity of these services is another battleground, with bots like IFTTT and Zapier automating routine tasks between disconnected services, or even automating tasks within the same product set.


Microsoft recently entered the market, or part of it anyway, with Flow. I say part of it because Microsoft's small list of connected services is heavily weighted toward business and enterprise use rather than personal apps. I'm watching to see if that changes. Microsoft has extended this type of service with advanced conditions in its connectors and the ability to add gateways to on-premises data.


Behind all of this, the bots are collecting and processing data. We are giving them information to feed off, that they will hopefully keep private and only use to improve their services (and sometimes their recommendations). Even in our personal lives, we’re connected to the Big Data in the Cloud. Don’t think about that for too long or we’ll get into Skynet territory. That’s another article coming soon.


But it is another driver behind Digital Transformation and something companies are now dealing with. All of these electronic records of usage and purchases are driving how companies create, refine, and supply products and services to their customers, and all of that ultimately shapes an organisation's IT requirements.


Do the bots concern you or have you automated your life outside of the office? Let me know in the comments.



I'm in Redmond this week for the annual Microsoft MVP Summit. I enjoy spending time learning new things, meeting fellow MVPs, and exchanging ideas directly with the data platform product teams. Being able to influence future product direction and features is the highlight of my career right now. It's a wonderful feeling to see your suggestions reflected in the products and services over time.


Anyway, here's a bunch of links I found on the Intertubz that you might find interesting, enjoy!


Banks Look Up to the Cloud as Computer Security Concerns Recede

For those of you still holding out hope that the Cloud isn't a real thing to be taken seriously, this is the asteroid coming to wipe you and the rest of the dinosaurs out.


Minecraft: Education Edition officially launches

Looks like we've found a way to get kids to stop playing Minecraft: we've turned it into work.


Dongle Me This: Why Am I Still Buying Apple Products?

Because they are simple and (usually) just work. But yeah, the dongles are piling up in this house as well.


Two Centuries of Population, Animated

You know I love data visualizations, right? Well, here's another.


Practical Frameworks for Beating Burnout

Long but worth the read, and do your best to consume the message which is "do less, but do it well".


There is no technology industry

It's never about the tech, it's *always* about the data.


This ransomware is now one of the three most common malware threats

Excuse me for a minute while I take backups of everything I own and store them in three different locations.


Next week I am expecting to see a lot of screens like this one, installing SQL Server on Linux:


A network monitor uses many mechanisms to get data about the status of its devices and interconnected links. In some cases, the monitoring station and its agents collect data from devices. In others, the devices themselves report information to the station and agents. Our monitoring strategy should use the right means to get the information we need with the least impact to the network's performance. Let's have a look at the tools available to us.

Pull Methods

We'll start with pull methods, where the network monitoring station and agents query devices for relevant information.


SNMP, the Simple Network Management Protocol, has been at the core of query-based network monitoring since the early 1990s. It began as a simple method for accessing a standard body of information called a Management Information Base, or MIB. Most equipment and software vendors have embraced and extended this body with varying degrees of consistency.

Further revisions (SNMPv2c, SNMPv2u, and SNMPv3) came along in the late 1990s and early 2000s. These added bulk information retrieval (SNMPv2c) and improved privacy and security (SNMPv3).

SNMP information is usually polled by the monitoring station at five-minute intervals. This allows the station to compile trend data for key device metrics. Some of these metrics are central to device operation: CPU use, free memory, uptime, &c. Others deal with the connected links: bandwidth usage, link errors and status, queue overflows, &c.

We need to be careful when setting up SNMP queries. Many networking devices don't have a lot of spare processor cycles for handling SNMP queries, so we should minimize the frequency and volume of retrieved information. Otherwise, we risk impact to network performance just by our active monitoring.
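One simple way to keep the frequency and volume in check is to stagger polls across the cycle so the station never hits every device at once. Here is a minimal, hypothetical Python sketch; the device names and the five-minute cycle are illustrative and not tied to any particular monitoring product:

```python
# Hypothetical sketch: spread device polls evenly across a five-minute cycle
# so the monitoring station never queries every device in one burst.
POLL_INTERVAL = 300  # seconds between successive polls of the same device

def build_poll_schedule(devices, interval=POLL_INTERVAL):
    """Assign each device a start offset so polls are distributed evenly
    across the interval instead of arriving all at the same instant."""
    if not devices:
        return {}
    step = interval / len(devices)
    return {dev: round(i * step) for i, dev in enumerate(devices)}

schedule = build_poll_schedule(["core-rtr1", "core-rtr2", "dist-sw1", "dist-sw2"])
# Each device is still polled every 300 s, but offset 75 s from its neighbour.
```

The same idea scales to per-OID scheduling if some counters need tighter intervals than others.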

Query Scripts

SNMP is an older technology, and the information we can retrieve through it can be limited. When we need information that isn't available through a query, we have to resort to other options. Often, scripted access to the device's command-line interface (CLI) is the simplest method. Utilities like expect, or scripting languages like Python and Go, let us filter CLI output to extract the necessary data.

As with SNMP, we need to be careful about taxing the devices we're querying. CLI output is usually far more detailed than an SNMP response, and producing it requires more work on the part of the device.
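In practice, this CLI scraping boils down to pattern matching on command output. A hedged sketch, using a made-up "show interfaces"-style text block (real output varies by vendor and software version):

```python
import re

# Hypothetical sketch: extract input-error counters from CLI output.
# The sample text below is invented; adjust the patterns to your platform.
SAMPLE_OUTPUT = """\
GigabitEthernet0/1 is up, line protocol is up
     5 input errors, 0 CRC, 0 frame
GigabitEthernet0/2 is down, line protocol is down
     312 input errors, 12 CRC, 0 frame
"""

def parse_input_errors(cli_text):
    """Return {interface: input_error_count} from 'show interfaces'-style text."""
    results = {}
    current = None
    for line in cli_text.splitlines():
        m = re.match(r"^(\S+) is (up|down)", line)
        if m:
            current = m.group(1)   # remember which interface block we're in
            continue
        m = re.search(r"(\d+) input errors", line)
        if m and current:
            results[current] = int(m.group(1))
    return results
```

The parsed counters can then be stored alongside SNMP trend data, so one graph covers both collection methods.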

Push Methods

Push methods are the second group of information-gathering techniques. With these, the device sends information to the monitoring station or its agents without first being asked.


SNMP has a basic push model in which the device sends urgent information to the monitoring station and/or agents as events occur. These SNMP traps cover most of the changes we want to know about right away. For the most part, they trigger on fixed occurrences: interface up/down, routing protocol peer connection status, device reboot, &c.


RMON, Remote Network MONitoring, was developed as an extension to SNMP. It focuses more on the information flowing across the device than on the device itself, and is most often used to define the conditions under which an SNMP trap should be sent. Where SNMP sends a trap when a given event occurs, RMON can use more specific triggers based on more detailed information. If we're concerned about CPU spikes, for example, we can have RMON send a trap when CPU usage rises too quickly.
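RMON's alarm group works on rising and falling thresholds with hysteresis: fire once on the way up, then re-arm only after the value drops back below the falling threshold. A small Python sketch of that behavior (the threshold values are illustrative):

```python
# Hypothetical sketch of an RMON-style rising/falling alarm: fire once when a
# sampled value crosses the rising threshold, then stay silent until the value
# falls back below the falling threshold (hysteresis, as in the RMON alarm group).
class RisingAlarm:
    def __init__(self, rising, falling):
        self.rising = rising
        self.falling = falling
        self.armed = True

    def sample(self, value):
        """Return True when an alert (e.g. an SNMP trap) should be sent."""
        if self.armed and value >= self.rising:
            self.armed = False
            return True
        if not self.armed and value <= self.falling:
            self.armed = True   # quietly re-arm once the value settles down
        return False

alarm = RisingAlarm(rising=90, falling=60)
events = [alarm.sample(v) for v in (50, 85, 95, 97, 55, 92)]
# -> [False, False, True, False, False, True]
```

Note that the sustained spike at 97 does not fire a second trap; that is exactly the flood-damping the hysteresis buys you.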


Most devices will send a log stream to a remote server for archiving and analysis. By tuning what is sent at the device level, operational details can be relayed to the monitoring station in near real time. The trick is to keep this information filtered at the transmitting device so that only the relevant information is sent.

Device Scripting

Some devices, particularly Linux-based ones, can run scripts locally and send the output to the monitoring server via syslog or SNMP traps. Cisco devices, for example, can use Tool Command Language (TCL) scripts or Embedded Event Manager (EEM) applets to provide this function.

Your Angle

Which technologies are you considering for your network monitoring strategies? Are you sticking with the old tried and true methods and nothing more, or are you giving some thought to something new and exciting?

Evolution is not just for science conversations; it is a critical aspect of effective IT management. As federal technology environments become more complex, the processes and practices used to monitor those environments must evolve to stay ahead of -- and mitigate -- potential risks and challenges.


Network monitoring is one of the core IT management processes that demands growth in order to be effective. In fact, there are five characteristics of advanced network monitoring that signal a forward-looking, sophisticated solution:


  1. Dependency-aware network monitoring
  2. Intelligent alerting systems
  3. Capacity forecasting
  4. Dynamic network mapping
  5. Application-aware network performance


If you’ve implemented all of these, you have a highly evolved network.  If you have not, it might be time to start thinking about catching up.


1. Dependency-aware network monitoring


A sophisticated network monitoring system provides all dependency information: not only which devices are connected to each other, but also network topology, device dependencies and routing protocols. This type of solution then takes that dependency information and builds a theoretical picture of the health of your agency’s network to help you effectively prioritize network alerts.
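That prioritization idea can be illustrated with a toy dependency model: if a device's upstream parent is already down, its own "down" alert is a symptom rather than a root cause. A hypothetical sketch (the topology map is invented):

```python
# Hypothetical sketch: suppress alerts for devices whose path to the monitoring
# station runs through a parent that is already down, so one router outage
# does not raise a separate alert for every host behind it.
PARENTS = {            # child -> upstream dependency (assumed topology)
    "dist-sw1": "core-rtr1",
    "web-srv1": "dist-sw1",
    "web-srv2": "dist-sw1",
}

def root_cause(down_devices):
    """Return only the devices whose failure is not explained by a parent."""
    down = set(down_devices)
    return {d for d in down if PARENTS.get(d) not in down}

alerts = root_cause(["core-rtr1", "dist-sw1", "web-srv1", "web-srv2"])
# Only core-rtr1 is escalated; the rest are suppressed as downstream effects.
```

A real solution builds that parent map automatically from topology discovery rather than by hand.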


2. Intelligent alerting system


The key to implementing an advanced network monitoring solution is having an intelligent alerting system that triggers alerts on deviation from normal performance based on dynamic baselines calculated from historical data. And an alerting system that understands the dependencies among devices can significantly reduce the number of alerts being escalated. This supports “tuned” alerts so that admins get only one ticket when there is a storm of similar events.
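One simple form of a dynamic baseline is flagging readings that fall outside a few standard deviations of recent history. A rough Python sketch (the three-sigma threshold and sample data are illustrative, not a vendor algorithm):

```python
import statistics

# Hypothetical sketch: flag a reading as anomalous when it falls outside a
# dynamic baseline of mean +/- k standard deviations of historical samples.
def is_anomalous(history, current, k=3.0):
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return abs(current - mean) > k * stdev

history = [40, 42, 41, 39, 43, 40, 41]  # e.g. last week's CPU% at this hour
is_anomalous(history, 42)   # within the baseline: no alert
is_anomalous(history, 95)   # far outside the baseline: alert
```

Because the baseline is recomputed from a sliding history window, "normal" adjusts itself as traffic patterns change, which is the point of dynamic baselining.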


3. Capacity forecasting


An agencywide view of utilization for key metrics, including bandwidth, disk space, CPU and RAM, plays two very important roles in capacity forecasting:


     1. No surprises. You must know what’s “normal” at your agency to understand when things are not normal. You can see trends over time and can be prepared in advance for changes that are happening on your network.


     2. Having the ability to forecast capacity requirements months in advance gives you the opportunity to initiate the procurement process before that capacity is needed.
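A basic capacity forecast can be as simple as fitting a straight line to recent utilization and extrapolating to the ceiling. A hedged sketch of that least-squares extrapolation (the sample growth numbers are invented):

```python
# Hypothetical sketch: fit a straight line to daily utilization samples and
# estimate how many days remain until a capacity limit is reached.
def days_until_full(samples, limit):
    """samples: utilization per day (oldest first); limit: capacity ceiling."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples)) / \
            sum((x - x_mean) ** 2 for x in xs)
    if slope <= 0:
        return None  # flat or shrinking: no exhaustion on this trend
    days_from_start = (limit - y_mean) / slope + x_mean
    return max(0.0, days_from_start - (n - 1))

# Disk usage growing ~2 GB/day from 500 GB toward a 1000 GB volume:
days_until_full([500, 502, 504, 506, 508], limit=1000)  # -> 246.0 days
```

Real tools add seasonality and confidence intervals, but even this straight-line view is enough to start a procurement conversation early.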


4. Dynamic network mapping


Dynamic network mapping displays how devices are connected on a single screen, with interactive maps that can show link utilization, device performance metrics, and automated geolocation. This way, you can see how everything is connected and find the source of a slowdown.


5. Application-aware network performance


Application-aware network performance monitoring collects information on individual applications as well as network data and correlates the two to determine what is causing an issue. You can see if it is the application itself causing the issue or if there is a problem on the network.


Evolving your network monitoring solution will help keep you ahead of the technology curve and help meet budget challenges by providing more in-depth information to ensure your monitoring is proactive and strategic.


Find the full article on Government Computer News.

Amongst the key technologies trending in the enterprise and data center space is virtualization of the network layer. Seems a little ephemeral in concept, right? So I'll explain my experience with it, its benefits, and its limitations.


First, what is it?

NFV (Network Functions Virtualization) is intended to ease the requirements placed on the physical switch layer. Essentially, the software for the switch environment sits on servers rather than on the switches themselves. Historically, when implementing a series of physical switches, an engineer had to use the language of each switch's operating system to create an environment in which traffic goes where it is supposed to and doesn't go where it shouldn't. VLANs, routing tables, port groups, etc. are all part of these command sets. These operating systems have historically been command-line driven, arcane, and difficult to reproduce consistently. Without disparaging the skills of network engineers, the potential for human error is quite high. It's also quite time-consuming. But when it's right, it simply works.


NFV takes that concept, embeds the task in software that sits on standardized servers, and rolls it out to the entire environment in a far more rapid, standardized, and consistent manner. In addition to that added efficiency and consistency, NFV can reduce a company's reliance on physical switch ports, which lowers the cost of switch gear, of heating and cooling, and of data center space.


In addition to the ease of rolling out new sets of rules consistently across the entire environment, there comes a new degree of network security. MicroSegmentation is defined as the process of segmenting a collision domain into various segments, and it is mainly used to enhance the efficiency or security of the network. The MicroSegmentation performed by a switch reduces each collision domain until only two nodes remain in it.


So MicroSegmentation, probably the most important function of NFV, doesn't actually save the company money in a direct sense, but it does allow for far more controlled traffic flow management. I happen to think that this security goal, coupled with the ability to roll these rules out globally and identically with a few mouse clicks, makes for a very compelling product stream.


One of the big barriers to entry in the category, at the moment, is the cost of the products, along with the differing approach taken by each product stream. Cisco's ACI, for example, attempts to address similar security and consistency goals but has a very different modus operandi than VMware's NSX. Beyond those differences, one open issue is how a theoretical merging of both ACI and NSX within the same environment would work. As I understand it, the issues could be quite significant. A translation effort, or an API to bridge the gap, so to speak, would be a very good idea.


Meanwhile, the ability to isolate traffic consistently across a huge environment could prove quite valuable to enterprises, particularly where compliance, security, and scale are concerns. I think of multi-tenant data centers, such as service providers, where the housed data must be controlled, the networks must be agile, and changes must take place in an instant; those needs are absolutely key for this category of product. I also think healthcare, higher education, government, and other markets present big adoption opportunities for these technologies.

It has been more than 20 years since HIPAA was enacted on August 21, 1996, and in those two decades it has seen quite a bit of change – especially with regard to its impact on IT professionals. First came the Privacy and Security Rules, then the HITECH Act of 2009, which added the breach notification rule, and then the issuance of the Final Omnibus Rule in 2013. Each advancement served to define the responsibilities of IT organizations to protect the confidentiality, integrity, and availability of electronic protected health information (ePHI) – one of the central tenets of HIPAA – and to stiffen the consequences for noncompliance. But despite these changes, HIPAA's enforcement agency (the Office for Civil Rights, or OCR) did not begin to issue monetary fines against covered entities until 2011. And only in recent years, with the announcement of OCR's Phase 2 HIPAA Audits, did it begin to set its sights on Business Associates (BAs), those third-party providers of services which, due to their interaction with ePHI, are now legally required to uphold HIPAA.


2016, however, has seen increasing monetary fines, including a $5.55 million settlement by Advocate Health Care[1]. The 2013 Advocate data breach exposed information on about 4 million people. It occurred two years before the Anthem breach, disclosed in early 2015, which affected up to 80 million people and was the largest health care data breach up to that point. Given the time it takes OCR to analyze and respond to data breaches, don't expect to see any Anthem breach analysis from OCR in 2016.


In the meantime, OCR is implementing their Phase 2 audits. This round of audits delves more deeply into organization policies with business associates, and examines some documents from live workflows. Some organizations have already received notification of their Phase 2 audits, which were sent on July 11, 2016. A total of 167 entities were notified of their opportunity to participate in a “desk audit,” which would examine their HIPAA implementation against the new Phase 2 guidelines. This initial audit will cover 200-250 entities, with most of them being completed via the desk audit process. A few entities will be selected for onsite audit[2].


What is a Phase 2 Audit?

First, Phase 2 audits cover both Covered Entities (CEs) and Business Associates. (Recall that the Final Omnibus Rule held CEs and BAs jointly and severally liable for compliance.) In the pilot phase audits, only Covered Entities were examined. Second, in this phase most of the audits will be completed via a “desk audit” procedure.


A desk audit is a nonintrusive audit where the CE or BA receives an information request from OCR. The information is then uploaded via a secure portal to a repository. The auditors will work from the repository to generate their findings.


Based on the updated Phase 2 audit guidelines, Phase 2 audits will cover the following areas.


Privacy Rule Controls

Uses and Disclosures; Authorizations; Verifications

Group Health Plans

Minimum Necessary Standard; Training

Notices; Personnel designations

Right to Access; Complaints

Security Rule Controls

Assigned Responsibility; Workforce Security

Risk Analysis and Risk Management

Incident Response, Business Continuity and Disaster Recovery

Facility and BA controls

Access Control and Workstation Security

Data Breach

Administrative Requirements

Training; Complaints


Breach; Notice


What is different from previous audits is the format of the audit (desk audit vs onsite) and the focus on reviewing not just policies, but worked examples of policy and procedure implementation[3] using actual samples of the forms and contents, as well as the outcomes of the requests contained within the forms. The complete list of all the elements of these initial Phase 2 audits is extensive. You can read the complete list on the HHS website.


Phase 2 audits require data or evidence of the results of an exercise of some of the HIPAA policies or processes. From an entity perspective that takes the audit to a practical level.


Turning specifically to the audit of security controls, most IT and security pros who have been through an IT risk assessment or an ISO audit will be familiar with the HIPAA audit structure. The approach is traditional; comparing policies to evidence the policy has been correctly implemented.


If you have not been through an audit before, don’t panic. Here are some basic rules.


  1. Be prepared. Review the evidence you will need to provide. Wherever possible, gather that evidence in a separate data room.
  2. Be respectful. Even if this is a desk audit being completed via portal, provide only the evidence asked for, in a legible format, within the time frame requested.
  3. Be honest. If you don’t have the evidence requested, notate that you don’t have it, yet. If you have implemented the control, but just don’t have the evidence, provide documentation of what you have implemented.
  4. Be consistent. To the extent possible, use the same format for providing evidence. If you need to pull logs, put them into a nicely formatted, searchable spreadsheet.
  5. Be structured. Make it easy for the auditor to find and examine your responses. If you need to provide policies, have them neatly formatted in the same font and structure. It's especially nice, when an auditor is reading lots of documentation, to have good section headings and white space between sections. PDF is the most advisable format, but make it searchable for ease of verification.
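For point 4, turning raw logs into a searchable spreadsheet can be a few lines of scripting. A hypothetical Python sketch (the log line format shown is invented; adapt the pattern to your own systems):

```python
import csv
import io
import re

# Hypothetical sketch: convert raw authentication-log lines into CSV that an
# auditor can open and search in any spreadsheet tool. The format is invented.
LOG_LINES = [
    "2016-07-11 09:14:02 LOGIN user=jsmith result=success",
    "2016-07-11 09:15:40 LOGIN user=abadguy result=failure",
]

def logs_to_csv(lines):
    """Return CSV text with one header row and one row per parsed log line."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["timestamp", "event", "user", "result"])
    for line in lines:
        m = re.match(r"(\S+ \S+) (\S+) user=(\S+) result=(\S+)", line)
        if m:
            writer.writerow(m.groups())
    return buf.getvalue()
```

Consistent column names across every evidence file make the auditor's job, and therefore yours, easier.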


Recall that the purpose of the Security Rule is to ensure two main things:

  1. That ePHI is correct, backed up, and accessible.
  2. That only appropriate, authorized users and processes have access to ePHI.


You can expect that you will need to provide evidence, mostly logs, that cover the controls that ensure the purpose of the Security Rule is being met. For example:

  1. Who is the responsible security officer, and what is their job description?
  2. When was your risk assessment completed?
  3. Have you implemented any remediation or compensating controls identified in your risk assessment?
  4. Can you demonstrate evidence that security violations are being identified and remediated? Expect your incident response procedures to be examined.
  5. Can you demonstrate that the workforce is complying with your security policies and procedures, including security training?
  6. Can you demonstrate that those who need access to ePHI can do so, and that when authorization is revoked (due to change of status, termination, etc.), that the electronic access is changed as well?
  7. What evidence can you show of anti-malware compliance?
  8. Are your cryptographic controls in place and up to date?  [Hint: see our blog post on PCI DSS 3.2 updates for information on SSL/TLS]
  9. Are your disaster recovery and business continuity plans actionable and tested? Include facilities access plans, which should address physical unauthorized access attempts.
  10. Have you implemented all of the access controls and policies required to share data with BAs?
  11. Can you demonstrate compliance with de-identification disposal requirements of electronic media upon change or de-activation?
  12. Don’t forget contracts. Even though you may be responsible for mostly technical controls, the Security Rule does have requirements for your contracts with BAs.


As with most compliance schemes, a number of the requirements are well understood standard security best practices. As our panel at THWACKcamp [See the Shields Up Panel Recap] agreed, the old adage, “If you are secure you are probably compliant,” applies to HIPAA, too.

Have you had a recent audit experience you’d like to share?  Please comment on this post. We can all learn from each other’s experiences. Happy auditing!





Getting Started

The network monitoring piece of network management can be a frightening proposition. Ensuring that we have the information we need is an important step, but it's only the first of many. There's a lot of information out there and the picking and choosing of it can be an exercise in frustration.

A Story

I remember the first time I installed an intrusion detection system (IDS) on a network. I had the usual expectations of a first-time user. I would begin with shining a spotlight on denizens of the seedier side of the network as they came to my front door. I would observe with an all-seeing eye and revel in my newfound awareness. They would attempt to access my precious network and I would smite their feeble attempts with.... Well, you get the idea.


It turns out there was a lot more to it than I expected and I had to reevaluate my position. Awareness without education doesn't help much. My education began when I realized that I had failed to trim down the signatures that the IDS was using. The floodgates had opened, and my logs were filling up with everything that had even a remote possibility of being a security problem. Data was flowing faster than I could make sense of it. I had the information I needed and a whole lot more, but no more understanding of my situation than I had before. I won't even get into how I felt once I considered that this data was only a single device's worth.


After a time, I learned to tune things so that I was only watching for the things I was most concerned about. This isn't an unusual scenario when we're just getting started with monitoring. It's our first jump into the pool and we often go straight to the deep end, not realizing how easy it is to get in over our heads. We only realize later on that we need to start with the important bits and work our way up.

The Reality

Most of us are facing the task of monitoring larger interconnected systems. We get data from many sources and attempt to divine meaning from the deluge. Sometimes the importance of what we're receiving is obvious and relevant: a critical-priority message telling us that a device is almost out of memory, for example. In other cases, the applicability of the information isn't as obvious; it only becomes useful material when we find out about a problem through other channels.


That obvious and relevant information is the best place to start. When the network is on the verge of a complete meltdown, those messages are almost always going to show up first. The trick is in getting them in time to do something about them.


Most network monitoring installations begin with polling devices for data. This may start with pinging each device to make sure it's accessible. Next comes testing connections to the services on the device to make sure none of them have failed. Querying the device's well-being with the Simple Network Management Protocol (SNMP) usually accompanies this too. What do these have in common? The management station asks the network devices, usually at five-minute intervals, how things are going. This is essential for collecting data for analysis and building a picture of normal operation. For critical problems, though, something more is needed.
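Beyond a bare ping, the simplest service test is a TCP connect with a short timeout. A minimal sketch (host and port below are placeholders):

```python
import socket

# Hypothetical sketch: the simplest "is the service up" poll is a TCP connect
# to the service port with a short timeout, one step beyond pinging the host.
def service_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds in time."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# service_reachable("10.0.0.1", 443)  # True if something is listening on 443
```

A real monitoring station layers protocol-aware checks (an HTTP GET, an SMTP banner read) on top of this, since a port can accept connections while the service behind it is wedged.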


This is where syslog and SNMP traps come into play. This data is actively sent by the monitored devices as events occur. There is no waiting out a five-minute interval to find out that the processor has suddenly spiked to 100% or that a critical interface has gone down. The downside is that there is usually far more information presented than is immediately necessary. This is the same kind of floodgate scenario I ran into in my earlier example. Configuring syslog to send messages at the "error" level and above is an important sanity-saving measure. SNMP traps are somewhat better in this regard, as they report on actual events instead of every little thing that happens on the device.
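Filtering on severity uses the number encoded in the syslog <PRI> prefix, which is facility * 8 + severity per RFC 3164/5424. A small sketch of station-side filtering (the sample messages are Cisco-flavored but invented):

```python
# Hypothetical sketch: keep only syslog messages at "error" (severity 3) or
# more urgent. The <PRI> prefix encodes facility * 8 + severity (RFC 3164/5424).
def severity(raw_message):
    """Extract the syslog severity (0-7) from the <PRI> prefix."""
    pri = int(raw_message[raw_message.index("<") + 1 : raw_message.index(">")])
    return pri % 8

def worth_keeping(raw_message, threshold=3):
    """Lower severity numbers are more urgent; 3 is 'error'."""
    return severity(raw_message) <= threshold

worth_keeping("<187>%LINK-3-UPDOWN: Interface Gi0/1, changed state to down")  # True
worth_keeping("<190>%SYS-6-LOGGINGHOST_STARTSTOP: ...")                       # False
```

Doing the same filtering at the sending device (e.g., configuring it to forward only errors and above) is even better, since the noise never crosses the wire at all.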

The Whisper in the Wires

Ultimately, network monitoring is about two things:


  1. Knowing where the problems are before anyone else knows they're there and being able to fix them.

  2. Having all of the trend data to understand where problems are likely to be in the future. This provides the necessary justification to restructure or add capacity before they become a problem.


The first of these is the most urgent one. When we're building our monitoring systems, we need to focus on the critical things that will take our networks down first. We don't need to get sidetracked by the pretty pictures and graphs... at least not until that first bit is done. Once that's covered, we can worry about the long view.


The first truth of RFC 1925, "The Twelve Networking Truths," is that it has to work. If we start from that point, we're off to a good start.

The PCI Data Security Standards define the security practices and procedures that govern the systems, services, and networks that interact with cardholder or sensitive payment authentication data. The environment in which cardholder data flows is defined as the cardholder data environment (CDE) and comprises the “people, processes, and technologies that store, process, or transmit cardholder data or sensitive authentication data”. 


While some PCI deployments are simple, such as a single point-of-sale terminal connected directly to a merchant authority, others are more complicated: deployments that interact with older systems, that have store-and-forward needs, or in which an acquirer of cardholder data must transmit or share data with another service provider. You may find yourself needing a solution that allows you to transfer cardholder data while maintaining PCI compliance.


When you need to move PCI data, whether within the CDE or for further processing outside of the CDE, you can use a managed file transfer (MFT) solution to accomplish this task. In this situation, you need to ensure that the MFT complies with all aspects of PCI DSS.


The main requirement governing data transfer is Requirement 4, which states that cardholder data must be encrypted when transmitted across open, public networks. More specifically the encryption implementation must ensure:


1. Only trusted keys and certificates are accepted.

2. The protocol in use only supports secure versions and configurations.

3. The encryption strength is appropriate for the encryption methodology in use.


For file transfer, the usual transports are either FTP over SSL, which runs the traditional FTP protocol tunneled through an SSL session, or HTTP running over SSL/TLS. Occasionally SSH2 is needed and may be used in situations where it is not possible to set up bi-directional secure transfers, or when only an interim transfer is needed.
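With Python's standard library, for example, enforcing a TLS 1.2 floor with full certificate verification takes only a few lines. This is a hedged sketch of the transport setup, not a complete managed file transfer configuration:

```python
import ssl

# Hypothetical sketch: build an SSL context for an HTTPS- or FTPS-based
# transfer that refuses anything older than TLS 1.2 and verifies the peer,
# in line with Requirement 4's "secure versions and configurations."
def pci_tls_context():
    ctx = ssl.create_default_context()            # verifies certs and hostnames
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3/TLS 1.0/1.1
    return ctx

ctx = pci_tls_context()
# Pass ctx to http.client.HTTPSConnection(..., context=ctx) or to
# ftplib.FTP_TLS(..., context=ctx) when opening the transfer session.
```

The key point is that both sides of the transfer must be configured this way; a permissive server will happily negotiate down to whatever an old client offers.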


A properly configured managed file transfer solution will enable users to:


1. Automatically transfer cardholder data for further processing

2. Support ad hoc secure transfers

3. Generate one-time-use secure transfer links


However, care must be taken to adhere to new PCI DSS 3.2 authentication and encryption requirements, as well as to ensure cardholder data is kept only for the time necessary to achieve the legitimate business need. We will address each of the new PCI requirements to ensure you can safely continue to use your managed file transfer solution.


Multifactor Authentication

PCI DSS 3.2 clarifies that any administrative, non-console access to the cardholder data environment must support multifactor authentication. This means multiple passwords, or passwords plus security questions, are no longer valid authentication methods.


For years, web application and even SSH access have relied on simple security questions, or just a user ID and password, for users to identify themselves to systems. Unfortunately, as seen in the recent Yahoo data breach disclosure, security questions may be stored in the clear, and such questions are often chosen from a standard list.
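A common second factor is a time-based one-time password (TOTP), where the server and a device share a secret and derive short-lived codes from the current time, so nothing static like a security question is ever sent. The following is an illustrative sketch of the RFC 6238 algorithm using only the Python standard library; a real deployment would use a vetted MFA product or library rather than hand-rolled crypto:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    counter = struct.pack(">Q", timestamp // step)          # 8-byte time counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this shared secret at T=59 yields 287082
print(totp(b"12345678901234567890", 59))  # 287082
```

Because the code changes every 30 seconds, a captured value is useless to an attacker minutes later, which is exactly the property a static password or security question lacks.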


From a PCI managed file transfer authentication perspective, the 3.2 multifactor requirement only impacts user-initiated transfers to a server, or administrative access to a server located in the cardholder data environment. If you are currently using either of these two scenarios with only password authentication, you should plan for migration by February 2018. You can read more about the new PCI authentication requirements in the PCI 3.2 changes blog post.



Encryption

The changes in PCI 3.2 regarding encryption are more extensive than the authentication requirements. Most managed file transfer deployments depend on SSL/TLS protocols to secure data in motion, and early versions of SSL/TLS have known vulnerabilities that make them unsuitable for ongoing use in managed file transfer under the new PCI standard. Although the 3.2 requirements permit these older versions of SSL/TLS if properly patched, configured, and managed, there is no need to use them in a managed file transfer environment: most systems and browsers have supported TLS 1.2 for some time. That said, even after configuring your server to accept only TLS 1.2 and above, there is still the matter of which cipher suites to select. TLS 1.2 supports over 300 cipher suites, and not all of them are acceptable for use with cardholder data.


PCI DSS 3.2 does not directly specify the cipher suites to use with TLS, leaving the implementer with the requirement that "the encryption strength is appropriate for the encryption methodology in use." PCI does provide additional guidance and points to NIST publication 800-52, which was last updated in April 2014. However, since that publication date, several critical vulnerabilities have been found in the implementations of certain cipher suites used by SSL/TLS, and additional vulnerabilities have been found in OpenSSL, a commonly used library. These include:


- FREAK, which forces a downgrade to an exploitable version of RSA

- DROWN, which relies upon a server supporting SSLv2 to compromise a client using TLS

- Five critical vulnerabilities in the OpenSSL implementation reported September 16, 2016


NIST 800-52 enumerates the cipher suites recommended for TLS 1.2 servers; consult the publication for the full list.
Care must be taken to ensure that null ciphers and lower-grade encryption ciphers are not enabled by default, as these ciphers can be exploited in man-in-the-middle attacks. To mitigate this risk, OWASP recommends a whitelist approach: limit your server to specific cipher suites, such as those recommended by NIST, or, if you cannot whitelist your cipher suites, at least disable the known weak ones.
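As an illustrative sketch of the whitelist approach using Python's `ssl` module (the suite names below are common OpenSSL names for forward-secret AES-GCM suites, shown as examples rather than a complete NIST-approved list):

```python
import ssl

# Restrict a server context to an explicit whitelist of strong cipher suites.
WHITELIST = ":".join([
    "ECDHE-ECDSA-AES256-GCM-SHA384",
    "ECDHE-ECDSA-AES128-GCM-SHA256",
    "ECDHE-RSA-AES256-GCM-SHA384",
    "ECDHE-RSA-AES128-GCM-SHA256",
])

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.set_ciphers(WHITELIST)

# Verify that no null, export-grade, or RC4 ciphers survived the whitelist.
enabled = [c["name"] for c in context.get_ciphers()]
assert not any(bad in name for name in enabled for bad in ("NULL", "EXP", "RC4"))
```

The final assertion is the point of the exercise: with a whitelist, anything you did not explicitly name simply is not offered to clients.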


The cipher suite is not the only cryptographic element of your managed file transfer solution. The SSL/TLS server also needs a private key and a corresponding X.509 certificate, which should be issued by a known Certificate Authority. Furthermore, in order to be PCI compliant, your certificate should meet NIST SP 800-57 key management requirements. From a practical perspective, OWASP recommends that server certificates should:


1. Use a key size of at least 2048 bits

2. Not use wild card certificates

3. Not use SHA-1

4. Not use self-signed certificates

5. Use only fully qualified DNS names
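Those five points can be turned into a simple pre-deployment check. The sketch below is hypothetical: the `cert` dictionary stands in for attributes you would parse from a real X.509 certificate with a library such as `cryptography`:

```python
def check_server_cert(cert: dict) -> list:
    """Return a list of problems per the OWASP recommendations above."""
    problems = []
    if cert.get("key_size", 0) < 2048:
        problems.append("key smaller than 2048 bits")
    cn = cert.get("common_name", "")
    if cn.startswith("*."):
        problems.append("wildcard certificate")
    if cert.get("signature_algorithm", "").lower().startswith("sha1"):
        problems.append("signed with SHA-1")
    if cert.get("self_signed", False):
        problems.append("self-signed certificate")
    if "." not in cn:
        problems.append("common name is not a fully qualified DNS name")
    return problems

# A compliant example certificate passes with no findings.
ok = {"key_size": 2048, "common_name": "mft.example.com",
      "signature_algorithm": "sha256WithRSAEncryption", "self_signed": False}
print(check_server_cert(ok))  # []
```

Running a check like this against every certificate before it is installed turns the recommendations from a policy document into an enforced gate.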


NIST 800-57 provides detailed guidance on protecting private keys. From a PCI perspective, the important elements of key management are:


Ensuring the integrity of the private key from:

1. Accidental or intentional reuse, modification, compromise

2. Exceeding the relevant cryptographic period (how long a private key is expected to be in use)

3. Incorrect configuration of a private key


It may seem like overkill to be so focused on encryption protocols, cipher suites, and private keys; however, if the private key is compromised, as it was with the Sony PlayStation 3, your entire system becomes vulnerable.


Storing Cardholder Data

While there are no changes to the requirements around storing cardholder data in PCI 3.2, if you use managed file transfer, you are storing cardholder data. Along with the technical guidelines on storing cardholder data, consider how you will mitigate the risk of accidental disclosure by removing any files containing cardholder data as soon as possible after the business use is complete. Having a policy that establishes data retention and secure destruction, and that logs the execution of these activities, will help you maintain PCI compliance.
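As a hedged sketch of what such a retention sweep might look like in practice (the seven-day window and directory layout are illustrative, and a real policy may also mandate secure overwriting rather than a simple delete):

```python
import logging
import time
from pathlib import Path

def purge_expired(dropbox: Path, max_age_days: int = 7) -> list:
    """Delete transferred files older than the retention window, logging each one."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for f in sorted(dropbox.iterdir()):
        if f.is_file() and f.stat().st_mtime < cutoff:
            f.unlink()
            removed.append(f.name)
            logging.info("retention policy: destroyed %s", f.name)
    return removed
```

Scheduling something like this, and keeping the log it produces, gives you both the destruction itself and the audit trail proving it happened.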


There are other requirements associated with any system or solution that operates under PCI, but the new requirements in PCI 3.2 focus on authentication and encryption. By working with your IT staff in advance and detailing your PCI use cases and requirements, with a focus on authentication and encryption, you can confidently deploy managed file transfer in your PCI environment.


Do you use file transfer solutions today?  Are you comfortable with the security they provide for Personally Identifiable Information?

Infrastructure automation is nothing new. We’ve been automating our server environments for years, for example. Automating network devices isn’t necessarily brand new either, but it’s never been nearly as popular as it has been in recent days.


Part of the reason network engineers are embracing this new paradigm is because of the potential time-savings that can be realized by scripting common tasks. For example, I recently worked with someone to figure out how to script a new AAA configuration on hundreds of access switches in order to centralize authentication. Imagine having to add those few lines of configuration one switch at a time – especially in a network in which there were several different platforms and several different local usernames and passwords. Now imagine how much time can be saved and typos avoided by automating the process rather than configuring the devices one at a time.
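To make that concrete, here is a hypothetical sketch of the templating half of such a script. The platform commands, TACACS+ server address, and key are illustrative, and actually pushing the rendered lines to devices (for example, with a library like Netmiko) is omitted:

```python
# Per-platform AAA command templates; syntax is approximate and illustrative.
AAA_TEMPLATES = {
    "cisco_ios": [
        "aaa new-model",
        "tacacs server CENTRAL",
        " address ipv4 {server}",
        " key {key}",
        "aaa authentication login default group tacacs+ local",
    ],
    "cisco_nxos": [
        "feature tacacs+",
        "tacacs-server host {server} key {key}",
        "aaa authentication login default group tacacs+",
    ],
}

def render_aaa(platform: str, server: str, key: str) -> list:
    """Render the AAA configuration lines for one device platform."""
    return [line.format(server=server, key=key) for line in AAA_TEMPLATES[platform]]

print(render_aaa("cisco_ios", "10.0.0.5", "s3cret"))
```

Looking up the template by platform is what lets one script handle a mixed fleet instead of one switch at a time.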


That’s the good.


However, planning the pseudocode alone became a rabbit hole in which we chased modules on GitHub, snippets from previous scripts, and random links in Google trying to figure out the best way to accommodate all the funny nuances of this customer’s network. In the long run, if this was a very common task, we would have benefited greatly from putting in all the time and effort needed to nail our script down simply because it would then be re-usable and shareable with the rest of the community. However, by the time I checked in again with some more ideas, my friend was already well underway configuring the switches manually simply because it was billable time and he needed to get the job done right away. There’s a balance between diminishing returns and long-term benefits to writing code for certain tasks.


That’s the bad.


We had some semblance of a script going, however, and after some quick peer review we wanted to use it on the remaining switches. Rather than modify the code to remove the switches my friend already configured, we left it alone because we assumed it wouldn’t hurt to run the script against everything.


So we ran the script, and several hundred switches became unreachable on the management network. Nothing went hard down, mind you, but we weren’t able to get into almost the entire access layer. Thankfully this was a single campus of several buildings using a lot of switch stacks, so with the help of the local IT staff, the management access configuration on all the switches was rolled back the hard way in one afternoon. This happened as a result of a couple guys with a bad script. We still don’t really know what happened, but we know that this was a human error issue – not a device issue.


That’s the ugly.


Network automation seeks to decrease human error, but the process requires skill, careful peer review, and maybe even a small test pool. Otherwise, the blast radius of a failure could be very large and impactful. There is also great automation software out there with easy-to-use interfaces that can enable you to save time without struggling to learn a new programming language.


But don’t let that dissuade you from jumping with both feet into learning Python and experimenting with scripting common tasks. In fact, there are even methods for preventing scripting misconfigurations as well. Just remember that along with the good, there can be some bad, and if ignored, that bad could get ugly.
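One such guardrail is a canary rollout: apply the change to a small sample first and verify reachability before touching everything else. A minimal sketch, where `apply_config` and `is_reachable` are placeholders for whatever your automation library provides:

```python
import random

def canary_rollout(devices, apply_config, is_reachable, sample_size=5):
    """Change a small random sample first; abort before the blast radius grows."""
    canaries = random.sample(devices, min(sample_size, len(devices)))
    for device in canaries:
        apply_config(device)
        if not is_reachable(device):
            raise RuntimeError(f"{device} unreachable after change; aborting rollout")
    for device in devices:
        if device not in canaries:
            apply_config(device)
```

Had the script in the story above been run this way, it would have stranded a handful of canary switches instead of almost the entire access layer.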



The Cloud! The Cloud! Take us to the Cloud it’s cheaper than on-premises, why? Because someone in marketing told me so!  No, but seriously. Cloud is a great fit for a lot of organizations, a lot of applications, a lot of a lot of things! But just spitting ‘Cloud’ into the wind doesn’t make it happen, nor does it always make it a good idea.   But hey, I’m not here to put Cloud down (I believe that’s called Fog) nor am I going to tout it unless it’s a good fit.   However, I will share some experiences, and hopefully you’ll share your own because this has been a particular area of interest lately, at least with me but I’m weird about things like deep tech and cost benefit models.


The example I'll share is one which is particularly dear to my heart. It's dear because it's about a Domain Controller! Domain Controllers are, for all intents and purposes, machines which typically MUST remain on at all times, yet don't necessarily require a large amount of resources. Run on-premises as a virtual machine, a domain controller carries a cost aggregated from your infrastructure, licensing, allocated resources, and O&M costs for power/HVAC and more. So how much does a Domain Controller running as a Virtual Machine cost inside your data center? If you were not to say, "It depends," I might be inclined not to believe you, unless you do detailed chargeback for your customers.


We've stood up that very same virtual machine inside of Azure, say a single-core, minimal-memory A1-Standard instance to act as our Domain Controller. Microsoft Azure pricing for our purposes was pretty much on the button, coming in at around $65 per month. Which isn't too bad; I always like to look at three years at a minimum for the sustainable life of a VM, just to contrast it to the cost of on-premises assets and depreciation. So while $65 a month sounds pretty sweet, or ~$2,340 over three years, I also have to consider other costs I might not normally be looking at: egress network bandwidth, and the cost of backup (let's say I use Azure Backup; that adds another $10 a month, so what's another $360 for this one VM?).
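The back-of-the-envelope math is simple enough to sketch. The $65 VM and $10 backup figures above are the ones reproduced here; egress is left as a variable because it varies wildly by workload:

```python
def three_year_cost(vm_per_month: float, backup_per_month: float = 0.0,
                    egress_per_month: float = 0.0, months: int = 36) -> float:
    """Total cost of an always-on cloud VM over the comparison window."""
    return (vm_per_month + backup_per_month + egress_per_month) * months

print(three_year_cost(65.0))        # 2340.0  (VM alone)
print(three_year_cost(65.0, 10.0))  # 2700.0  (VM plus Azure Backup)
```

Comparing that total against your fully loaded on-premises number (infrastructure, licensing, power/HVAC) is what actually answers "is the cloud cheaper" for an always-on workload.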


The cost benefits can absolutely be there if I am under or over a particular threshold, or if my workloads are historically more sporadic and less ‘always-on, always-running’ kind of services.

An example of this: we have a workload which normally takes LOTS of resources and LOTS of cores and runs until it finishes. We don't have to run it too often (quarterly), so while allocating those resources and owning the assets would be great, they wouldn't be used every single day. Instead, we spin up a bunch of compute- or GPU-optimized instances, and where it might have taken days or weeks in the past, we can get it done in hours or days; we get our results and release the resources once the data is dumped out.


Certain workloads will tend to be more advantageous than others to keep on-premises or to host exclusively in the cloud, whether sporadically or all the time. That really comes down to what matters to you, your IT, and your support organization.


This is where I'm hoping you, my fellow IT pros, can share your experiences (good, bad, ugly) about workloads you have moved to the cloud. I'm partial to Azure, Google, or Amazon, as they've really driven things down to commoditized goods and battle amongst themselves, whereas an AT&T, Rackspace, or other "hosted facility" type cloud can skew the costs or benefits when contrasted with the "Big Three."


So what has worked well for you? What have you loved and hated about it? How much has it cost you? Have you done a full shift, taking ALL your workload to a particular cloud or clouds? Have you said "no more!" and taken workloads OFF the cloud, back on-premises? Share your experiences so that we may all learn!


P.S. We had a set of workloads hosted off-premises in Azure which were brought wholly back in house, as the high-performance yet persistent, always-on nature of the workloads was costing 3x-4x more than if we had simply bought the infrastructure and hosted it internally. (Not every workload will be a winner.)


Thanks guys and look forward to hearing your stories!


Normalcy is boring, or is it?

Posted by Dez Employee Nov 3, 2016



          Something I have been working on is helping to come up with a baseline security plan for an IT team and their infrastructure.  What I have run into is that having a basic template and starting point really helps.  Fantastic, right?  Well, when I start off by giving them credit for monitoring, they look at me peculiarly, as in: why would monitoring be a starting point?  To be fair and accurate, a few high-five me, like SAWEETNESS (meant to be spelled wrong as that literally is how I speak, ok back to the blog ) check that off the list of things to come!  Today, I'm going to go over this one portion of the plan and show why "knowing normal" is actually the starting point for great security best practices and policies.


     First things first, my favorite quote: "If you don't know what's normal, how the heck do you know when something's wrong?"  A baseline and accurate monitoring history will show you what's normal.  It will also show you how your infrastructure handles new applications and loads, so monitoring isn't just for up/down; that's just a side perk, honestly.


Ok, now once you know what normal is, the following will help you to see issues more easily and be aware. So remember, the list below applies once you have monitored and understand the normal behavior of the devices you're monitoring.


Monitoring security features

  • Node - up/down
    • This will show you if there is a DoS happening, or a configuration error when there's no ability to ping a device.
    • Will show you areas within your monitoring that are possibly being attacked.
    • Allows you to have a clear audit of the events that are taking place, which you and your team can use for management and assessments.
  • Node - CPU/Memory/Volume
    • CPU will show you if there is a spike, helping you find what increased or caused a spike that never went away.
    • Memory: if there is a spike, obviously something is holding it hostage and you need to address it and prevent or resolve.
    • Volume: if you see a drive increase OR decrease its capacity quickly and are alerted to it, you may be able to stop things like ransomware quickly.  The trick is to be monitoring AND have alerts set up to make you aware of drastic changes.
  • Interface - utilization
    • Utilization will show you if a sudden increase of data is transferring into or out of an interface.
  • Log file monitoring
    • Know when AD attempts are failing.
      • This is something I see a lot, and the person monitoring just states "yes, but it's just an old app making the request, no biggy."  Ok, to me I'm like: fix the old application so this is no longer NOISE; then, when failures come in from outside this app, you are more inclined to investigate and stop the threat.
    • Encryption: know if files are being encrypted on your volumes.
    • Directory changes: if directory/file changes are happening, you need to be aware, period.
  • Configuration monitoring
    • Real-time change notification that compares to the baseline config is vital to make sure no one is changing configurations outside of your team.  Period, end OF STORY.  (I preach this a lot, I know.  #SorryNotSorry)
  • Port monitoring
    • When rogue devices plug into your network, you need to know when and who immediately.
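As a toy illustration of why the baseline matters, here is a sketch that flags a new CPU sample deviating more than three standard deviations from monitored history. The threshold and sample values are made up, and a real monitoring product does this with far more sophistication:

```python
import statistics

def is_abnormal(baseline: list, sample: float, sigmas: float = 3.0) -> bool:
    """Flag a sample that strays too far from the monitored baseline."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    return abs(sample - mean) > sigmas * stdev

cpu_history = [22, 25, 24, 23, 26, 24, 25, 23]  # percent CPU, recent polls
print(is_abnormal(cpu_history, 25))  # False: within normal
print(is_abnormal(cpu_history, 95))  # True: investigate
```

Without the history (the baseline), there is no way to say whether 95% CPU is an attack, a runaway process, or just what this box always does at month-end.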


          These are obviously not all the ways you can use normalcy, but it's once again a start.  Understanding normal is vital to setting up accurate alerts, reports, and monitoring features.  As you hone your skills in assessing what you are monitoring and alerting on, you'll see some things drop off while others increase within your environment.


          Don't be shy to ask questions like, why is this important?  I seen this article on an attack, how can we be alerted in the future if this happens to us?  Some of the best monitoring I've seen is due to looking through THWACK and reading articles on what's going on in mainstream.  Bring this knowledge to your monitoring environment and begin crafting an awesome arsenal against, well, the WORLD.





Don’t you love an intriguing headline? I’m not going to talk about our beloved IT jobs just yet, but some other industries are in for a wake-up call.


Microsoft Research made an interesting announcement recently. Tasked with improving digital speech recognition, Xuedong Huang, Microsoft's chief speech scientist, declared 12 months ago: "Speech technology is so close to the level of human performance. I believe in the next three years we'll reach parity." They've done it in less than a year.


Microsoft’s speech recognition system is now making the same or fewer errors than professional human transcriptionists. The 5.9 percent error rate is about equal to that of people who were asked to transcribe the same conversation, and it’s the lowest ever recorded against the industry standard Switchboard speech recognition task. “We’ve reached human parity,” said Xuedong Huang, the company’s chief speech scientist. “This is an historic achievement.” Deep neural networks with large amounts of data were key to this technology breakthrough. Does it spell the end for human transcriptionists? Services like Facebook and YouTube already offer automatic captioning and their technology is only going to get better.


We’re on the brink of coding computers to make better recommendations than we do.


The Xero financial software is a great example of this. At this year’s Xerocon event in Brisbane, Australia, CEO Rod Drury showed off plans to remove the option for adding accounting codes when entering invoices. Why remove this fundamental accounting task feature? Because humans are stuffing it up and machines can do it better. In Xero’s tests, it took a very small amount of time for the machines to learn which things to code to which accounts, resulting in a lower error rate than when a human inputted the data. Watch out, bookkeepers. The technology enhancements in the financial services industry are just getting started.


Again, the key is access to data. Machines can hold significantly more historical data than the human brain can. They are faster at analyzing it, and they are better at identifying unexpected patterns. Have you ever run Insights across data in Microsoft's Power BI? The question in the air is whether this analysis will see bots become better financial advisers than our accountants, based on historical trends and current economic analysis.


Just as IT pros are told not to ignore the rise of cloud computing, other industries should be careful not to sleep while machine learning and AI are on the rise. This is the real digital disruption for the companies we provide IT support for.


Are we prepared for it?



As the 2016 year comes to an end, I've looked back, and wow, it has been the year of upgrades for me at my day job. While they have all been successful (some took longer than expected), there were bumps, tears, and even some screaming to get to the finish line. My team and I are seasoned IT professionals, but that didn't stop us from making mistakes and assumptions. What I've learned after doing five major upgrades this year: never assume, and always be prepared for the worst while hoping for the best.


As you embark on the annual journey of upgrades, there are many factors to look at to make sure an upgrade is successful. While it may seem trivial at times, depending on the upgrade, it never hurts to go through a basic upgrade run-through, like a playbook, or, if you have a PMO, to work with a Project Manager. Project Managers can be lifesavers! But you do not need a Project Manager if you take the time to gather all the information and requirements as part of your planning.


After looking back through all the upgrades I've done this year, I decided to write this post in the hope that others can learn a little something from the lessons we've learned and avoid the same mistakes.

Let’s get back to basics…

Once we start talking upgrades let's go back to the basics and answer the five “Ws” and get some requirements; WHAT, WHY, WHO, WHERE, and WHEN. Understanding those basic requirements goes a long way. It provides the basic foundation for understanding the situation and what all needs to be done.

WHAT - Let's first ask: what are we upgrading? Is this a server operating system upgrade or an application upgrade? Determining the type of upgrade is vital because it will affect the answers to your other requirements. Once you know what you are upgrading, you will need to determine whether your current environment can support the upgrade. Depending on what you are upgrading, it can feel like opening a can of worms as you find you may need to upgrade other systems to ensure compatibility. You may also find that the upgrade reaches beyond your realm of responsibility and crosses over into other departments and functions. A "simple" application upgrade can end up costing millions if your current environment does not support all components.


Some example questions to ask:

  1. If you're doing an application upgrade, do your current hardware specs meet the recommendations for the newer version? If not, you may need to invest in new hardware.
  2. Is this an operating system upgrade?
  3. Is this an in-place upgrade or a parallel environment?
  4. Or a complete server replacement?
  5. Can you go direct to the new version, or will you need to install CUs to get current?
  6. Can your current network infrastructure support the upgrade? Does it require more bandwidth?
  7. If you are using load balancers or proxy servers, do those support the upgrade?
  8. Are there client applications that connect to your systems, and are you running supported versions of them?
  9. Do you have Group Policies that need to be modified?
  10. What other applications that "connect" may be impacted?
  11. Are there any legacy customizations in the environment that will be impacted?
  12. Will there be licensing impacts or changes with the upgrade?


Sample upgrade scenario:


An application like Exchange Server has impacts reaching far beyond just the application. If an Exchange DAG is implemented, the network must meet certain requirements for successful replication between the databases across datacenters. Working with your network team ensures those requirements are met. You may also need the storage team if you are using a SAN for your storage, which may require new hardware, and we all know upgrading a SAN can be a project in itself.


An often overlooked item is the client connection to Exchange. What version of Outlook are users using to connect to their email? If they are on an unsupported version of Outlook, users may have issues connecting to email, which we all know would be a nightmare to deal with. So consider the impact of Outlook versions on an Exchange upgrade: if your Outlook versions are not supported, you will need to work with the desktop teams to get everyone to a supported version. This can be costly, from supporting, implementing, and deploying the Outlook upgrade, to possibly buying additional licenses depending on how your Microsoft Office is licensed, and we all know that isn't cheap.


WHY - Let's ask: why are you doing the upgrade? Is the upgrade needed to address an issue, or is it to stay current? Once this has been identified, you can find out what new features (if any) you will be getting and what value they bring to the table.


Additional questions to ask:


  1. Will any new features impact my current environment?
  2. If I am addressing an issue with the upgrade, what is it fixing and are there other workarounds?
  3. Will the upgrade break any customizations that may be in the environment?
  4. Can the upgrade be deferred?


WHO - Once you've figured out the "WHAT," you will know "WHO" needs to be involved. Getting all the key players together will help make sure you have your ducks in a row.


  1.     What other teams will you need to have involved?


      • Network team
      • Security Team
      • Storage Team
      • Database Team
      • Server Team
      • Application Team
      • Desktop Support
      • Help Desk
      • Project Management Team
      • Testing and certification Team
      • Communications team to inform end users
      • Any other key players – external business partners if your systems are integrated


In certain cases, you may even need a technology partner to help you do the upgrade. This can get complicated, as you will need to determine who is responsible for each part of the upgrade. Having a partner do the upgrade is convenient, as they can assume overall responsibility for its success while you watch and learn from them. Partners can bring value because they are often "experts" who have done these upgrades before and should know the pitfalls to watch out for. If you are using a partner, I would recommend you do your own research in addition to the guidance and support provided by the partner, because sometimes the ball can be dropped on their end as well. Keep in mind they are human and may not know everything about a particular application, especially if it's very new.


WHEN - When are you planning to do the upgrade? Most enterprises do not like disruptions, so you will need to determine whether this must be done on a weekend, or whether you can do the upgrade during the week without impacting production users.


The timing of your upgrade can impact other activities that may be going on in your network. For example, you probably do not want to be doing an application upgrade like Skype for Business or Exchange the same weekend the network team is replacing or upgrading the network switches. That could have you barking up the wrong tree when there's no need to.


WHERE - This may seem like an easy question to answer, but depending on what you're upgrading, you may need to make certain arrangements. Let's say you're replacing hardware in the datacenter: you will certainly need someone in the datacenter to perform the swap. If your datacenter is hosted, perhaps you will need a hands-on tech to reboot the physical servers in the event a remote reboot doesn't work.


I've been in situations where the reboot button didn't work and the server's power cord had to be pulled to bring it back online, which involved getting someone in the datacenter to do that. Depending on your setup and processes, this may require you to file support tickets in advance and coordinate with the datacenter hosting team. Who wants to sit around waiting several hours for a server reboot just to progress to the next step in an upgrade?



HOW - How isn't really a W, but it is an important step. Sometimes the HOW can be answered by the WHAT, but sometimes it can't, so you must ask, "HOW will this get upgraded?" Documenting the exact steps to complete the upgrade, whether it's in-place or a parallel environment, will help you identify any potential issues or key steps that may be missing from the plan. Once you have the steps outlined in detail, it's good to do a walkthrough of the plan with all involved parties so that expectations are clear and set. This also helps prevent any scope creep that could appear along the way. A documented, detailed step plan will also help during the actual upgrade in the event something goes wrong and you need to troubleshoot.


Proper Planning keeps the headaches at bay…


It would seem common sense, almost standard, to answer the five Ws when doing upgrades, but you would be surprised how often these questions are not asked. Too often we get comfortable in our roles, overlook the simple things, and make assumptions. Assumptions can lead to tears and headaches if they cause a snag in your upgrade. Lots of ibuprofen can be avoided if we plan as best we can and go back to the basics of asking the five Ws (and the How) for information gathering.

Home for a week after the PASS Summit before heading back out to Seattle on Sunday for the Microsoft MVP Summit. It's the one week a year where I get to attend an event as an attendee and not working a booth or helping to manage the event. That means, for a few days I get to put my learn on and immerse myself in all the new stuff coming to the Microsoft Data Platform.


As usual, here's a bunch of links I found on the Intertubz that you might find interesting, enjoy!


AtomBombing: The Windows Vulnerability that Cannot be Patched

I've been digging around for a day or so on this new threat and from what I can tell it is nothing new. This is how malware works, and the user still needs to allow for the code to have access in the first place (click a link, etc.). I can't imagine who among us falls for such attacks.


This is the email that hacked Hillary Clinton’s campaign chief

Then again, maybe I can imagine the people that fall for such attacks.


Apple's desensitisation of the human race to fundamental security practices

And Apple isn't doing us any security favors, either.


Mirai Malware Is Still Launching DDoS Attacks

Just in case you thought this had gone away for some reason.


Earth Temperature Timeline

A nice way of looking at how Earth temperatures have fluctuated throughout time.


Surface Studio zones in on Mac's design territory

We now live in a world where Microsoft hardware costs more than Apple hardware. Oh, and it's arguably better, too, considering the Surface still has the escape and function keys.


Swarm of Origami Robots Can Self Assemble Out of a Single Sheet

Am I the only one that's a bit creeped out by this? To me this seems to be getting close to having machines think, and work together, and I think we know how that story ends.


Management in ten tweets

Beautiful in its simplicity, these tweets could serve as management 101 for many.


I wasn't going to let anyone use the SQL Sofa at PASS last week until I had a chance to test it first for, um, safety reasons.


It is a good time to remember that improving agency IT security should be a yearlong endeavor. Before gearing up to implement new fiscal year 2017 IT initiatives, it is a best practice to conduct a security audit to establish a baseline and serve as a comparison point for thinking about how the agency's infrastructure and applications should change, and what impact that will have on IT security throughout the year.


Additionally, security strategies, plans and tactics must be established and shared so that IT security teams are on the same page for the defensive endeavor.


Unique Security Considerations for the Defense Department


Defense Department policy requires agencies to follow the NIST Risk Management Framework (RMF) to secure information technology that receives, processes, stores, displays, or transmits DOD information. I won't detail the six-step process here; suffice it to say that agencies must implement the needed security controls, assess whether they were implemented correctly, and monitor their effectiveness to improve security.


That brings us back to the security audit: a great way to assess and monitor security measures.


Improving Security is a Year-Round Endeavor


The DOD has a complex and evolving infrastructure that can make it tricky to detect abnormal activity and determine whether it is a genuine threat, without also blocking legitimate traffic. Tools such as security information and event management (SIEM) platforms automate some of the monitoring to lessen the burden.


These tools should automate data collection and analyze the data for compliance long after audits have been completed.


Automated tools should also make it easy to demonstrate compliance quickly, and if they ship with DISA STIG and NIST FISMA compliance reports, that's another huge time-saver.
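As a toy illustration of that kind of automation, the sketch below checks a single STIG-style setting (remote root login over SSH must be disabled) against a configuration excerpt. The sample file contents and the rule are illustrative only; this is not an official STIG scanner.

```python
# Minimal sketch of an automated compliance check for one rule:
# "PermitRootLogin no" must be the effective sshd setting.
# Sample config lines are illustrative, not from a real system.

def check_sshd_config(lines):
    """Return True if 'PermitRootLogin no' is the effective setting."""
    setting = "prohibit-password"  # a common default when unset
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[0].lower() == "permitrootlogin":
            setting = parts[1].strip().lower()  # last match wins
    return setting == "no"

if __name__ == "__main__":
    sample = [
        "# /etc/ssh/sshd_config (excerpt)",
        "Port 22",
        "PermitRootLogin no",
    ]
    status = "PASS" if check_sshd_config(sample) else "FAIL"
    print(f"PermitRootLogin check: {status}")
```

A real tool would run hundreds of such checks on a schedule and roll the results into the compliance reports mentioned above.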


Performance monitoring tools also improve security posture by identifying potential threats based on anomalies. Network, application, firewall, and systems performance monitoring tools with algorithms that highlight potential threats help ensure compliance and security on an ongoing basis.
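To make the idea of anomaly-based flagging concrete, here is a minimal sketch using a simple z-score over a window of samples. Real monitoring platforms use far richer baselines; the metric, window, and threshold here are all illustrative.

```python
# Minimal sketch of anomaly flagging on a performance metric,
# using a z-score over a sample window. Threshold is illustrative;
# with small windows the maximum possible z-score is bounded.
import statistics

def flag_anomalies(samples, threshold=2.0):
    """Return indices of samples more than `threshold` std devs from the mean."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # flat series: nothing stands out
    return [i for i, s in enumerate(samples)
            if abs(s - mean) / stdev > threshold]

requests_per_min = [120, 118, 125, 122, 119, 900, 121]
print(flag_anomalies(requests_per_min))  # → [5], the traffic spike
```

A spike like the one at index 5 might be a DDoS ramp-up or a legitimate surge; the point is that the tool surfaces it for a human to triage.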


Five additional best practices help ensure compliance and an overall secure infrastructure throughout the year:


  • Remove the need to be personally identifiable information (PII) compliant, unless it's absolutely critical. For example, don't store stakeholder PII unless agency processes require it. Not storing the data removes the risk and responsibility of securing it.


  • Remove stored sensitive information that isn't needed. Understand precisely what data is stored and how, and ensure that whatever is kept is encrypted, making it useless to attackers.


  • Improve network segmentation. Splitting the network into discrete “zones” boosts performance and improves security, a win-win. The more a network is segmented, the easier it will be to improve compliance and security.


  • Eliminate passwords. Think about all the systems and applications that fall within an audit zone and double-check proper password use. Better yet, eliminate passwords entirely and implement smart cards, recognized as an industry best practice.


  • Build a relationship with the audit team. A close relationship with the audit team ensures they can be relied upon for best practices and other recommendations.
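The first two bullets above (avoid storing PII, and make retained data useless to attackers) can be sketched in a few lines: instead of keeping a raw sensitive value, store only a keyed hash (HMAC) of it, so records can still be matched without retaining the value itself. The key, field names, and sample record below are all hypothetical, and a real deployment would manage the key separately (for example, in an HSM).

```python
# Minimal sketch: pseudonymize PII before storage with a keyed hash
# (HMAC-SHA256), so the raw value is never retained. Key and field
# names are hypothetical; manage the key outside the application.
import hashlib
import hmac

SECRET_KEY = b"replace-with-managed-key"  # illustrative only

def pseudonymize(value: str) -> str:
    """Return a stable, non-reversible token for a sensitive value."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"case_id": "A-1001", "ssn": "078-05-1120"}  # sample data
stored = {"case_id": record["case_id"],
          "ssn_token": pseudonymize(record["ssn"])}  # no raw SSN kept
print(len(stored["ssn_token"]))  # → 64 (hex digest)
```

The same token is produced for the same input, so lookups and joins still work, but an attacker who steals the stored records gets no usable PII without the key.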


  Find the full article on Signal.
