
Geek Speak


Latency is the principal enemy of an administrator. If your virtual infrastructure is running smoothly and latency is at an acceptable level, everything is fine; but if latency on the storage side or on the network goes through the ceiling, you're in trouble. You might also be running applications that are very sensitive to latency.

 

Concerning network latency, there are a few things you can tweak not only in the virtual infrastructure but also on the physical infrastructure. Packets traveling from a VM on one ESXi host to a VM on another ESXi host cross a physical switch at some point, and a few improvements can be made on that physical switch as well.

 

So how do you "fight" latency, and which fine-tuning steps can an admin implement for very latency-sensitive VMs?

 

  • BIOS settings - New servers with newer CPUs from Intel or AMD often ship set to a power-saving profile, but high performance is the only way to go! C-states also increase latency, even though they save energy, so if you're running latency-sensitive applications and chasing every single microsecond in your environment, you need to disable C-states. Also make sure that vSphere (ESXi) is set to the High Performance power policy. Depending on your hardware manufacturer, check its documentation for the best possible performance settings too.


  • NUMA - The vCPUs of a latency-sensitive VM should be scheduled on specific NUMA nodes, and all of the VM's memory should be allocated from those same nodes. In vSphere, for example, you can enable this through Manage > Settings > Edit > Advanced > Edit Configuration, where you add the option "numa.nodeAffinity" with a comma-separated list of node numbers as its value (see the sample .vmx excerpt after this list). The exact procedure depends on your virtualization platform and, for vSphere, on your version; check the documentation for "numa.nodeAffinity".

 

  • Physical and virtual NIC settings - It's possible to disable interrupt moderation (also called interrupt throttling) on physical NICs, which helps achieve lower latency for very sensitive, low-latency applications. The feature exists to prevent the host from being overwhelmed by CPU cycles spent just handling interrupts, so keep in mind that even though disabling interrupt moderation on physical NICs benefits very latency-sensitive VMs, it adds CPU overhead on the host and can affect the performance of the other VMs running there. On the virtual NIC side, it's important to choose the right vNIC type, such as VMXNET3, which is now the default for most guest operating systems; double-check it for the applications and VMs where you really need the best possible performance. It's also possible to disable virtual interrupt coalescing per vNIC (by adding "ethernetX.coalescingScheme" with the value "disabled"). A host-wide setting exists as well, but it affects ALL virtual machines running on that host.

 

  • Virtual disk SCSI controller choice - Depending on the OS running in a particular VM, it can be useful to change from the usual default (LSI Logic SAS) to VMware Paravirtual (PVSCSI), which leads to lower CPU utilization and higher throughput (roughly 7-10%); to deliver the same number of IOPS it uses about 10% fewer CPU cycles. The problems PVSCSI had back in vSphere 4.1 were solved a long time ago, so now is the time to use this most efficient driver. Note that only four PVSCSI controllers per VM are currently supported.

 

  • Guest OS optimizations - What is there to say about guest OS optimization? I've already mentioned the importance of vCPU and memory sizing, but other tweaks are possible at the application and guest OS level. Depending on the workloads you're running, you can find specific optimization guides; for VDI deployments you tweak the master image and deactivate some services, while for server operating systems things are usually optimized VM by VM. You can start with the virtual hardware itself by removing unnecessary floppy drives, COM ports, and USB ports. Another application that often consumes huge in-guest resources is Java.
  • Java can be configured to use large memory pages. This has to be done on the Java side by adding a command-line option when launching Java (-XX:+UseLargePages) - see info here. The corresponding guest OS tweak is usually called "Lock pages in memory" and can be applied to a VM or group of VMs via GPO (see here); an example launch command follows this list. The related VMware ESXi settings are beneficial but have some drawbacks too, considering that there are other optimization techniques involved as well.
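
To pull the per-VM options above together, here is a minimal sketch of what the relevant advanced settings could look like in a VM's .vmx file (or in the Edit Configuration dialog). The node list and the "ethernet0"/"scsi0" device names are placeholders for illustration only; use the NUMA nodes and devices that actually exist in your environment, and test before rolling anything into production.

    numa.nodeAffinity = "0,1"
    ethernet0.coalescingScheme = "disabled"
    scsi0.virtualDev = "pvscsi"

The first line pins the VM's scheduling and memory to NUMA nodes 0 and 1, the second disables virtual interrupt coalescing on the first vNIC, and the third switches the first SCSI controller to the paravirtual adapter. For the Java large-pages tweak in the last bullet, the JVM would be launched with something like the following (the heap size and class name are made up for the example), after the service account has been granted the "Lock pages in memory" right in the guest:

    java -XX:+UseLargePages -Xmx4g com.example.LatencySensitiveApp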

 

Wrap up:

 

In the last few articles I've tried to help other admins and users of virtualization technologies get the most out of their virtual infrastructures. The posts can be found here:

theguinn

This weekend is TOWEL DAY!

Posted by theguinn May 23, 2014

As I'm trying to explain Towel Day to the non-geek folks in my life, I'm struck yet again by how few people know about the brilliance of Douglas Adams. The conversations always seem to go something like this: "Seriously?! You've never heard of Hitchhiker's Guide? Hollywood even made that one into a (bad) movie?! I suppose that means you've never read Dirk Gently or Last Chance to See, or seen the couple of Doctor Who episodes he wrote." /sigh. Blank stares, and I change the subject.

 

Stephen Fry, another genius of our time, has said, "He changed the way people spoke. You still hear some of his jokes from the Hitchhiker's books being told in pubs." If you are not an Adams fan, I will avoid waxing poetic about this awesome man and his work and simply tell you to go. Go pick up any of his works and you'll see why he's such a staple of a community that holds intelligence above all and isn't afraid of being a bit silly. Or just read Wikipedia: http://en.wikipedia.org/wiki/Douglas_Adams. That's what Ford Prefect would probably do.

 

Towel Day was created as an ongoing celebration of the life and work of the late Douglas Adams (1952-2001). The first Towel Day, May 25th, was decreed exactly two weeks after Adams' death as a day designated for an international celebration of his life.

 

The great man himself describes the usefulness of a towel:

"A towel, it says, is about the most massively useful thing an interstellar hitchhiker can have. Partly it has great practical value. You can wrap it around you for warmth as you bound across the cold moons of Jaglan Beta; you can lie on it on the brilliant marble-sanded beaches of Santraginus V, inhaling the heady sea vapours; you can sleep under it beneath the stars which shine so redly on the desert world of Kakrafoon; use it to sail a miniraft down the slow heavy River Moth; wet it for use in hand-to-hand-combat; wrap it round your head to ward off noxious fumes or avoid the gaze of the Ravenous Bugblatter Beast of Traal (such a mind-bogglingly stupid animal, it assumes that if you can't see it, it can't see you); you can wave your towel in emergencies as a distress signal, and of course dry yourself off with it if it still seems to be clean enough.

 

More importantly, a towel has immense psychological value. For some reason, if a strag (strag: non-hitch hiker) discovers that a hitchhiker has his towel with him, he will automatically assume that he is also in possession of a toothbrush, face flannel, soap, tin of biscuits, flask, compass, map, ball of string, gnat spray, wet weather gear, space suit etc., etc. Furthermore, the strag will then happily lend the hitch hiker any of these or a dozen other items that the hitch hiker might accidentally have "lost." What the strag will think is that any man who can hitch the length and breadth of the galaxy, rough it, slum it, struggle against terrible odds, win through, and still knows where his towel is, is clearly a man to be reckoned with.

 

  — Douglas Adams, The Hitchhiker's Guide to the Galaxy

 


I invite you to pop on over to http://www.towelday.org/ and maybe take in an event. Or maybe just snuggle up with your favorite towel and a good book.
 
How will you be celebrating Towel Day? Have pictures? Post them below!

Following the Target store breach last December, eBay has become the latest victim of data theft. While Target is still dealing with the fallout from its massive data breach, eBay has asked its 128 million active users to change their passwords. One thing that both of these breaches have in common is that when they occur, the victim comes under scrutiny and is always asked the question, "What were the security and response systems in place that allowed this to happen?"

 

These instances act as a wake-up call, and remind us that we need to do a reality check for disaster preparedness in our organizations. Do you have your guard up against attempts of unauthorized entities? Are the required security controls in place?

 


One statement that stands strong for all security incidents: simple security best practices help you prepare for when things go wrong.


Given the nature, diversity, complexity, volume, and size of an organization’s business transactions, it’s high time that companies took a proactive stance to counter unexpected cyber threats and breach attempts. Enforcing security controls is not enough; continuous monitoring and improvement of security policies is also mandatory.

 

A key aspect of being prepared for bad times is to ensure that external regulatory standards are applied and continuously monitored for conformance. Companies like eBay that deal with credit/debit card transactions must implement or revisit their existing PCI compliance strategy. To achieve compliance with externally imposed policies, ensure compliance with your internal systems of control. Automation and the use of software to continuously monitor and quickly configure network elements help enforce security policies effectively, saving time and helping administrators stay watchful and keep the network secure.

 

In spite of enforcing protective controls, there’s still no assurance that you will not be a victim of a breach. Given the ever-evolving talent of cyber criminals, organizations, in addition to protecting their network and data, must have a feasible response system. As the recent Heartbleed bug incident showed, organizations should have a plan and the means to quickly recover from and guard against incidents that are not in their control. It’s critical to have the necessary tools to implement the recovery plan without delays caused by factors like resource availability, device complexity/number, up-to-date device/network information, and so on.

 

Even more than the millions of dollars lost due to interrupted operations, organizations suffer most from the loss of reputation and credibility. The price paid for not being vigilant cannot be gauged and differs from industry to industry. Are you doing everything you can to avoid putting your organization at such risk?

Every other day, the Internet is flooded with reports of cardholder information theft, financial data loss due to misconfigured ‘secure network environments’, identity theft, and so on.

 

If you are in the financial services industry, how do you create a secure environment that is compliant with the Payment Card Industry Data Security Standard (PCI DSS)? To start with, the PCI compliance standard defines various merchant levels, validation types, and most importantly, PCI requirements (12 requirements) and hundreds of controls/sub-controls that ought to be followed to the letter.

 

According to a recent Verizon 2014 PCI Compliance Report, only 11.1% of companies passed all 12 requirements, and a little over 50% of companies passed 7 requirements.


 

Hackers are upping the ante. Getting into the specifics of PCI compliance to protect financial data can be daunting, yet unavoidable. Well, the good news is that with proper NCCM (network configuration and change management) software, you can ensure that:

  1. Your network is secure and compliant
  2. You efficiently pass audits and avoid ‘last minute’ pressure (not to mention that unique combination of surprise audits & Murphy’s Law!), and
  3. You don’t contribute to the ‘cost of non-compliance’

 

Cost of non-compliance: costs incurred in terms of heavy fines (millions of USD) for regulatory non-compliance, and/or losing financial data amounting to millions or billions of dollars.


Some inconvenient stats: global card fraud exceeded $11 billion in 2012 (The Nilson Report, 2013), and losses from fraud using cards issued in the Single Euro Payments Area (SEPA) were about €1.33 billion in 2012 (Third Report on Card Fraud, European Central Bank).

 

Ensuring 100% PCI compliance in your network can be challenging due to one or more of the following:

  • Many routers, switches and firewalls - manually tracking configuration changes is a pain
  • Manually running cron jobs to back up configurations - time-consuming and error-prone
  • Manually pushing configs to the network devices via TFTP servers
  • Manually checking PCI requirements on a periodic basis and applying changes as appropriate
  • Your existing software not supporting a multi-vendor environment
  • You don’t have visibility into what changed, when, and by whom
  • The current manual processes are outrageously laborious, as you may have hundreds of network devices to manage and too few network admins

 

Of course, all network admins try their best to ensure compliance and keep their networks secure, each doing so in their own style. A few important things they may need to better manage compliance would be:

  • Getting hold of readily available PCI reports
  • Having fine control over policies, reports and rules
  • Automating remediation scripts on a node or bunch of nodes
  • Change approval management
  • Backing-up/updating/restoring devices to compliant configurations when config changes go awry

 

The PCI DSS standard is here to stay, and it’s only going to get tougher and tougher to counter the rising fraud rates. So, how are you coping with PCI compliance?

arjantim

That's the way, aha aha...

Posted by arjantim May 19, 2014

With a lot of customers running their infrastructure almost 100% virtualized these days, I see more and more people moving away from array-based replication to application-based (or hypervisor-based) replication. With Zerto ZVM, VMware vSphere Replication, Veeam replication, PHD Virtual ReliableDR, and hyper-converged vendors offering their own replication methods, the once-so-mighty array-based replication feature seems beaten by application-based replication.

 

Last week I visited a customer where storage-based replication was the only replication type used, but their architects were changing the game in the virtual environment, and application-based replication was the road they wanted to ride. Where the virtualization administrators and most of the other administrators saw a lot of potential in application-based replication, the storage administrators and some of the others were more convinced by what storage-based replication has offered them all these years.

 

What about you? Do you prefer storage-level replication, or is application/hypervisor-level replication the way to go?

Performance in the virtual world is complex, but it's also one of the most important topics! Every admin I meet puts performance in first (or second) place among his priorities; the other priority is disaster recovery. Depending on the environment, admins usually prefer to deliver the best performance, but they care about fast backups (and recovery) too. No one likes users complaining about performance, right?

 

Virtualization brought another layer of complexity: performance bottlenecks. In fully virtual or mixed environments, where some shops add further complexity by running two different hypervisors, problems might not be simple to solve.

 

Usually there isn't just a single bottleneck, because bottlenecks usually go hand in hand with misconfigurations on the VM side. Some principal misconfigurations repeat over and over: misconfiguration of the VMs themselves, and some waste on the infrastructure side as well, both of which can lead to a loss of performance. Here are a few areas in which to seek improvements (a short pyvmomi sketch for spotting oversized VMs follows the list):

 

  • VMs with multiple vCPUs when only a single vCPU is needed - start with less and add more later.
  • VMs with network adapter types that aren't adequate for the OS - check the requirements and follow the documentation (RTFM)
  • VMs with too much memory allocated - does that VM really need 16 GB of RAM?
  • Storage bottlenecks - we all know that storage is the slowest part of the datacenter (except flash). That's changing with server-side flash and various acceleration solutions, as well as hyper-converged solutions like VMware VSAN, but the problem might also lie in the storage network or the HBA.
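
As a quick illustration of hunting for the first and third misconfigurations above, here is a minimal sketch using the community pyvmomi Python SDK (an assumption about tooling, not something from the original post). The vCenter hostname and credentials are placeholders, and the 2-vCPU / 16 GB thresholds are arbitrary examples; adjust them to whatever "oversized" means in your shop.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Lab-only shortcut: skip certificate verification.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="admin@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Walk every VM in the inventory and flag the ones that look oversized.
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    for vm in view.view:
        cfg = vm.config
        if cfg is None:  # skip VMs whose configuration is not available
            continue
        if cfg.hardware.numCPU > 2 or cfg.hardware.memoryMB > 16 * 1024:
            print("%s: %d vCPU, %d MB RAM" % (vm.name, cfg.hardware.numCPU,
                                              cfg.hardware.memoryMB))

    Disconnect(si)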

 

The virtual environment also struggles with the fact that in smaller shops there often aren't many rules about who does what. It's the case where several admins, plus a developer team, take the virtual infrastructure for their playground. Everyone creates VMs that then sit there doing nothing, VM snapshots lie around consuming valuable SAN disk space, and suddenly there is not enough space on the datastores, backups stop working, or VMs have performance problems. There is a name for that - VM sprawl!

 

Here, too, some order first, please. Define rules for who does what, which VMs should exist, and in which situations it's important to keep a snapshot (or not). Think of archiving VMs rather than keeping snapshots; there are some good backup tools for that.

 

Modern virtualization management tools can help when performance suddenly drops. Some of them can detect configuration changes in the virtual environment, which means you can see what changed on the day performance started to go bad. It doesn't always help, because other factors fall outside their scope; the tools might be monitoring changes only in the virtual environment itself and not in the outside physical world. But it can narrow down the problem.

Performance forms the basis of measurement of any system, process, or solution that is performing an action. How good the performance of the system is tells us how effective and beneficial it is in meeting its intended purpose. Monitoring performance by measuring performance indicators is the right way to keep performance under control. Performance monitoring is not the same for all aspects of the IT infrastructure. A virtualization environment (VMware® or Hyper-V®, etc.) differs from a server or application environment in the context of performance management. There is more to monitor with virtualization: different entities and different metrics at the interface between the physical hardware and the guest operating system. What makes virtualization performance management difficult is that the guest OS does not see the physical hardware; it sees the virtual hardware emulated by the hypervisor. To gain complete visibility, you need a VMware and Hyper-V monitoring solution that is aware of the virtualization layer, can distinguish it from the physical hardware, and can pinpoint exactly the source of an issue.

 

The performance of a virtualization environment is best measured by capturing a combination of VM and host statistics such as the following (a short example of reading a few of them follows the list):

  • CPU usage and the time taken for a CPU to be available
  • Memory actively used, allocated and swapped by the VMkernel
  • Disk usage in terms of I/O and the latency to process commands issued by the guest OS
  • Network in terms of the number of transmit and receive packets dropped
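
For a rough feel of a few of these counters, the per-VM quickStats that vCenter already maintains can be read with the same pyvmomi approach sketched earlier (hostname and credentials are again placeholders). Deeper metrics such as CPU ready time, device latency, and dropped packets come from the PerformanceManager counters rather than quickStats, so treat this only as a starting point.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="admin@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    for vm in view.view:
        qs = vm.summary.quickStats
        # overallCpuUsage is in MHz; the memory counters are in MB.
        print("%s cpu=%sMHz active=%sMB ballooned=%sMB swapped=%sMB" % (
            vm.name, qs.overallCpuUsage, qs.guestMemoryUsage,
            qs.balloonedMemory, qs.swappedMemory))
    Disconnect(si)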

 

In a virtualized environment, pinpointing the source of a problem can get complex, as the symptoms can be caused by many different things. The very nature of a virtual environment – which is all about hosts and VMs sharing resources and the flexibility to move VMs – can make troubleshooting performance problems more difficult. For example, a host performance issue can cause VM resource contention and performance bottlenecks, and a VM performance issue may slow down the applications running on it.

To stay ahead of performance problems, it is essential to detect the underlying resource contention and the top resource consumers in the environment, and also to be able to surface performance trends to get a forecast of where the virtual environment is headed in terms of resource utilization, capacity, and workload assignment.

 

End-to-end, holistic virtualization monitoring – from host to VM to storage – is the best way to get insight into the performance metrics of your virtual infrastructure. If you need to get ahead of virtualization performance problems, you need to receive real-time alerts pointing out irregularities so you can take corrective action immediately. In essence, performance management is a must-have capability for constantly monitoring the health of the virtualization infrastructure, identifying issues in real time, and preventing them from causing virtualization performance bottlenecks and, eventually, application performance issues.

 

Read this free white paper to learn more about virtualization performance management.


As a systems administrator you are often tasked with knowing a lot about server and application performance. Tools like SAM can give you insight into overall server health, and AppInsight for SQL can show you some details about the activity querying an instance of SQL Server. But what do you do when you need to take a deeper dive into the database engine?

 

Join me next week, May 22nd, at 12 Noon ET for a live webinar titled "Optimizing the Database for Faster Application Performance". I will talk about general monitoring and troubleshooting best practices, how SAM can help identify servers that need attention, and how to use Database Performance Analyzer to drill down inside the database engine to find the root cause for slow query performance.

 

You can register for the event by going here: http://marketo.confio.com/DatabasePerformanceforSAMMay_RegPage.html?source=SWI (note: you need a valid business email address to register!)

 

See you then!

System Center is a great platform to perform application monitoring and management, but it contains a few gaps where native functionality isn’t provided. Fortunately, System Center is an extensible platform and SysAdmins can leverage 3rd party tools that quickly fill technology gaps and often plug into the System Center infrastructure to provide seamless visibility.  SolarWinds®, a partner in the System Center Alliance program, offers multiple capabilities to help SysAdmins fill functional gaps in System Center, including:

  • Non-Microsoft® application monitoring & custom application monitoring
  • Capacity planning for Hyper-V® & VMware® virtual environments
  • Third party patch management
  • Multi-vendor database performance
  • Storage & network management

            

Application performance management tools such as Server & Application Monitor (SAM) offer additional visibility beyond System Center. Moreover, SAM can be easily deployed alongside your System Center environment so you can seamlessly manage other aspects of your IT environment.

       

  • Monitor the Monitor: SolarWinds has worked with Cameron Fuller, Operations Manager MVP, to build application monitoring templates to monitor the health of System Center Operations Manager (services, Agent & Management Server) and System Center Configuration Manager 2012.  To get a detailed list of the services, apps, and TCP ports monitored, please see details in the Template Reference Document
  • Deep Microsoft application monitoring: Go deeper than just looking at availability and performance of critical Microsoft applications using System Center. Microsoft application monitoring tools like Server & Application Monitor (SAM) from SolarWinds provide a level of depth to manage issues in applications like Exchange and SQL Server® that System Center doesn’t provide. For Exchange environments, Server & Application Monitor provides details on individual mailbox performance (attachments, synced mailboxes, etc.). In addition, you can monitor Exchange Server performance metrics like RPC slow requests, replication status checks, and mailbox database capacity.   SAM also provides detailed dashboards for finding SQL server issues like most expensive queries, active sessions, and database & transaction log size. 
  • Non-Microsoft application monitoring: An IT infrastructure isn’t only limited to Microsoft and because System Center has limited capabilities to manage non-Microsoft applications, IT pros require management packs for non-Microsoft application monitoring.  SolarWinds provides the ability to quickly add application monitoring for non-Microsoft apps. Additionally, SolarWinds supports over 150 apps out-of-the-box.  With SolarWinds’ Management Pack, you can integrate application performance metrics right into System Center Operations Manager. 
  • Capacity planning for Hybrid Virtual Environments: In addition to having visibility into application performance, SolarWinds provides capacity planning for organizations with larger virtual server farms or a private cloud infrastructure. Whether it’s Microsoft Hyper-V or VMware, you get instant insights into capacity planning, performance, configuration, and usage of your environment with the built-in virtualization dashboard.
  • Contextual visibility from the app to the LUN: For applications that leverage shared compute and storage resources, Admins are able to contextually navigate from the application to the datastore and related LUN performance across multiple vendors in each layer of the infrastructure. SolarWinds supports over 150 apps, Hyper-V & VMware, and all the major storage vendors.  Integrated visibility is provided natively with SolarWinds Server & Application Monitor, Virtualization Manager, and Storage Manager.
  • Third-party patch management: Expand the scope of WSUS by leveraging pre-packaged third-party updates for common apps, such as Java®, Adobe®, etc. A patch management tool such as Patch Manager, which natively integrates with System Center, manages patch updates for Microsoft as well as third-party applications. In addition, you can deploy, manage, and report on patches for 3rd-party applications, get notified when new patches are available for deployment, and synchronize patches automatically on a fixed schedule with this integration.
  • Database performance management: System Center offers limited visibility into performance of non-Microsoft databases from Oracle®, IBM®, Sybase®, etc.  Database Performance Analyzer is an agentless, multi-vendor database performance monitoring tool that goes deeper into monitoring the performance of databases. For example, it allows Admins to look at the performance problems most impacting end-user response time and provides historical trends over days, months, and years.
  • Network management: In the previous release of System Center, Microsoft added limited network management capabilities. SolarWinds provides advanced network management capabilities, including network top talkers, multi-vendor device details, network traffic summaries, NetFlow analysis, and more. This information can be integrated into System Center Operations Manager with the free Management Pack.

            

In conclusion, Admins should utilize a single comprehensive view of servers, applications, and the virtual & network infrastructure, especially when all are integrated into System Center. In turn, you can look at the exact performance issue, easily detect hardware & application failures, and proactively alert on slow applications.

        

Learn how SolarWinds can help you by allowing you to gain additional visibility beyond System Center for comprehensive application, virtualization, and systems management by checking out this interactive online demo.

Bring your own device (BYOD) is not new to IT. Most companies are allowing employee-owned devices access to the corporate network. It may lead to cost savings for management and a fair deal of employee convenience, but for IT it is certainly an uphill task with a boatload of management headaches. While there are known security issues and a lack of management capabilities due to increasing device endpoints (thanks to BYOD!), organizations are making their BYOD policies more stringent and pushing IT departments to ensure security, compliance, and optimized use of IT resources.

 

What most companies have been overlooking with the BYOD trend is that it has given rise to greater personalization and consumerization of IT services than anticipated. Employees are starting to feel comfortable using their own devices – which is fine – but they have also started leveraging shadow IT services, bringing into the enterprise network their own applications (aka BYOA), collaboration systems, and cloud storage. This practice is gaining popularity and is called bring your own IT (BYOIT); it includes not only mobile devices, tablets, and laptops, but a host of other IT services and third-party applications that IT teams have no control over. To make it more arduous for IT teams, BYOIT is happening from employee-owned devices that IT already has limited visibility into. To name a few examples, employees are

  • Installing applications from the Internet to resolve issues
  • Leveraging third-party cloud services for data storage
  • Using insecure file transfer methods to share data
  • Connecting mass storage devices to enterprise workstations
  • Using antivirus and antimalware software that are not IT-approved
  • Using collaboration systems and instant messengers for communication outside the firewall
  • Utilizing network bandwidth for personal use – streaming videos and downloading apps

 

Besides the obvious security and compliance implications, there are many IT management headaches such as:

  • Maintaining inventory of connected devices and tracking BYOD assets to employees
  • Inconsistent IT approach to patch management and upgrades
  • Difficulty enforcing policies due to different device platforms and operating systems
  • Difficulty in handling IT tickets, and more time spent investigating issues and troubleshooting them
  • Difficulty spreading IT awareness as different users are using different devices and platforms

 

Do you face this challenge in your organization? If you are in IT, how do you control this situation and put reins on BYO-anything?

There is a technology available through most of the United States capable of providing net bit rates in the range of “terabits per second” and extremely low latency. Though big data enterprises like Google, Microsoft and Facebook are already using this for their data transfers, not many Fortune 500 enterprises have considered this. The technology is known as ‘dark fiber’.

 

What is dark fiber?

 

During the best years of the dot.com bubble in the late 90’s, telecom and other large utility companies, foreseeing exponential demand for network access, laid more fiber optic cable than needed. One reason was that they expected readily available network capacity to help capture the market. Another is that more than 70% of the cost of laying fiber optic cables goes towards labor and other infrastructure development1, so it made sense to lay more fiber than needed and save on future labor expenses rather than lay new cable as and when needed. But two factors left most of this fiber unused:

  1. The development of Wavelength-division Multiplexing (I refuse to explain that, but you can read up on it here) increased the capacity of existing optical fiber cables by a factor of 100!
  2. The dot.com bubble burst and the demand for network connectivity died down.

In fiber optic communication, light pulses are what carry information, so when data is being transmitted the fiber lights up. Any fiber that is not transmitting data remains unlit and is called ‘dark fiber’. Today, the term dark fiber mostly refers to fiber optic cables that were laid in anticipation of demand but are now unused or abandoned. There are thousands of miles of dark fiber2 available throughout the United States, sold or leased out by the companies that built them or bought them from bankrupt telecoms.

 

Should you consider dark fiber?

 

Dark fiber is suitable for enterprises that need low-latency, high-speed connectivity with zero interference from service providers and have the capex to invest. Here are a few scenarios where dark fiber can help.

 

Point-to-point connections, such as those to network-neutral data centers, cloud providers, DR, and backup sites, would do better with Gigabit or even Terabit transfer speeds. Fiber optic cables are capable of exactly that: a single pair of fibers can transfer Gigabits of data per second.

 

There are enterprises whose bandwidth speed requirements can change from a few Gbps to an unpredictably high limit. Optical fiber is capable of virtually unlimited data speeds allowing it to meet high and unpredictable bandwidth demands.

 

Enterprises, especially those involved in stock trading or online gaming and those using newer communication technologies such as HD video conferencing and VoIP need ultra-low latency connections which optical fiber is capable of providing.

 

Dark fiber also provides net neutrality. If you are purchasing Gold-class QoS from your ISP for priority data delivery to your data center or branches, dark fiber needs none of that. With dark fiber, data is delivered over your privately owned cable, and because of its high bandwidth there is no need for traffic prioritization either.

 

Finally, you get the ability to transfer data from your offices to data centers or the cloud without having to worry about the data being read, modified or stolen. Dark fiber is your private connection where only you have access to both the data as well as the fiber that transmits the data.

 

And a few more facts:

 

Dark fiber is an excellent choice if you already have a router that supports fiber connections, thereby ensuring last mile high speed data delivery. But before you consider buying or leasing dark fiber, make sure you have a real business requirement. Here are a few more facts to consider:

  • Renting or buying dark fiber is cheap but you still need to invest in hardware and other equipment needed to light up the cables.
  • Optical fiber is a physical asset that needs maintenance. Consider the costs involved in maintaining the fiber and related infrastructure.
  • The time needed to identify and resolve outages is much higher than with Ethernet. On the other hand, issues such as cable cuts happen very rarely with fiber optic cables because of the manner in which they are laid.

 

If dark fiber costs are prohibitive, consider alternatives such as ‘wavelength services’ where instead of the whole fiber, you lease a specific wavelength on the fiber based on requirements.

Still sounds like hype? Trust copper!

 

In IT we're used to having a buzzword every now and then. Most technicians just continue doing what they're good at and maintain and upgrade their infrastructure in a manner only they can, as they know how the company is serviced in the best way possible with the available funds and resources.

 


As a successor to the cloud buzzword we now have software defined everything: Software Defined DataCenter (SDDC), Software Defined Networking (SDN), Software Defined Storage (SDS), and so on. And although I don't like buzzwords, there is a bigger meaning behind Software Defined Everything.

 


When looking at the present datacenter you can see that the evolution of hardware has been very impressive. CPU, storage, RAM, and network have evolved to a stage where software seems to have become the bottleneck. That's where Software Defined Everything comes into the picture: building the software that will use the potential of the hardware. My only point is that saying everything should be "software defined" bypasses everything the hardware vendors have done, and will continue doing, which is a tremendous amount of hard work, research, and development to give us the datacenter possibilities we have today. So naming it "software defined" is wrong if you ask me.

 


Looking at storage, there is a lot of great storage software to leverage the just-as-great storage hardware. Look at some of the SolarWinds Storage Manager features, like:

 

 

  • Heterogeneous Storage Management
  • Storage Performance Monitoring
  • Automated Storage Capacity Planning
  • VM to Physical Storage Mapping

You can see how software uses the hardware to provide technicians the tools they need to manage their datacenter and give customers the IT they need. In future posts I would like to go deeper into some of these features, but for now I just wanted to share my thoughts on why software is an extension of hardware AND the other way around. You're more than welcome to disagree and leave your view in the comments.

 

See you in the comments

One of the cool functions of any virtualization management suite is not only having a nice dashboard showing which VMs perform poorly on which datastore/host, but also being able to see what will happen if (a small projection example follows the list):

 

  • I create 10 VMs a week - in how many weeks/days will I need a new SAN or new host(s)? (storage capacity planning)
  • I add more VMs - I increase my workloads by adding 20 new SQL Server databases
  • Workloads rise - what storage latency will I have on my datastores if the overall workload increases by 20%?
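
As a back-of-envelope illustration of the first what-if scenario (all figures here are invented for the example, not taken from any particular tool): with 4 TB of datastore capacity free, an average VM footprint of 40 GB, and 10 new VMs a week, you consume roughly 400 GB a week and have about 10 weeks before new storage is needed.

    # Hypothetical storage capacity projection.
    free_capacity_gb = 4096   # free datastore space today
    avg_vm_size_gb = 40       # average footprint of a new VM
    vms_per_week = 10         # provisioning rate

    weekly_growth_gb = avg_vm_size_gb * vms_per_week   # 400 GB per week
    weeks_left = free_capacity_gb / weekly_growth_gb   # about 10 weeks
    print("Roughly %.1f weeks until new capacity is needed" % weeks_left)

A real management suite does the same arithmetic against measured growth trends and adds IOPS, CPU, and RAM headroom to the picture.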

 

These are the prediction possibilities. If you could predict the future in your life, you'd probably be a millionaire, right? You know you can't.

 

However, what if you want to predict the future in your datacenter? Yes, you can! For example, by simulating an increase in the workloads you get the whole picture for 2 weeks, 2 months, or more ahead. Pretty awesome.

 

As an IT guy I usually visit existing datacenters, so pure greenfield projects are rare. Usually the client already has some virtualization in place, and in that case I always use the existing workloads as the baseline for predicting future expansion.

 

A client of mine recently asked me how many new hosts he would need to accommodate X new VMs plus growth of Y VMs a day. No more guessing: the tools are here for such tasks, and if used correctly they give accurate results.

 

Have you tried digging into one of those tools, or do you continue guessing?

It comes as no surprise that cyber intrusions and data breaches are increasing in the industry today, given the sophistication of the threat landscape. We have seen catastrophic breaches in recent times with the likes of Target, Neiman Marcus, Michaels, and many more. While cybersecurity protection looks vulnerable for all sectors – be it retail, financial, manufacturing, or even the public sector – there is more concern for healthcare IT security and data protection. The number of security incidents is on the upsurge for healthcare IT, and according to the breach report by healthcare IT security firm Redspin, HIPAA data breaches rose by 138% in 2013 over 2012.

 

FBI ISSUES SECURITY WARNING

In a private notice to healthcare providers, the FBI has issued a warning that the healthcare industry is not as resilient to cyber intrusions as the financial and retail sectors, and therefore increased cyber intrusions are likely.

While it’s a good thing that the FBI is focusing efforts to alert healthcare companies on their security policies, it’s also a worrying fact that the state of healthcare information security is in shambles. The recent Verizon DBIR 2014 considers data theft and data loss to be the most popular method of compromising cybersecurity in the healthcare industry in 2013.

 

Theft/Loss: 46%
Insider Misuse: 15%
POS Intrusion: 9%

 

PATIENT HEALTHCARE INFORMATION – HIGH ON DEMAND

Experts say that medical information is becoming more sought after by hackers these days, given the different things a hacker could do with this data.

  • It’s more difficult to detect that stolen healthcare information has been used, compared to financial records, which hackers use immediately to purloin money
  • Criminals also use medical records to impersonate patients with diseases so they can obtain prescriptions for prescription-only medication and drugs
  • Stolen healthcare patient data is supposedly costlier in the underground crime market in comparison with credit card records

  

Take a look at this list from hhs.gov of all recorded healthcare breaches in the US: https://ocrnotifications.hhs.gov/iframe?dir=desc&sort=affected_count

  

The recent St. Joseph Health System breach revealed that the Texas hospital exposed up to 405,000 past and current patient records, which also included employee and employee-beneficiary information. Healthcare IT just keeps getting assailed by cyber criminals, and organizations are just not able to set up the right defenses for information security. Breaches don’t only impact the healthcare institution through the stolen records; there are other implications, including compliance violations and penalties, reputational loss, and lawsuits, which can take an even bigger financial toll.

 

FINES FLYING HIGH

The New York-Presbyterian Hospital and the Columbia University Medical Center have agreed to pay the largest-ever HIPAA violation settlement, totaling $4.8 million, in response to a joint data breach report submitted by the affiliated healthcare institutions in 2010 that reportedly exposed the electronic protected health information (ePHI) of 6,800 patients. Earlier, in 2011, Maryland-based healthcare provider Cignet Health faced a massive $4.3 million fine for a HIPAA violation.

  

WHAT CAN HEALTHCARE IT DO TO IMPROVE CYBER DEFENSE?

Well, healthcare IT teams can start with risk assessment. If you are responsible for IT security in a healthcare organization, answer the following questions first:

  • What data do I store in my systems?
  • Do I know the existing vulnerabilities in my system?
  • What protection mechanism is in place?
  • Do I have governing policies to secure IT assets?

In case of a breach,

  • Do I have a threat/breach detection system?
  • Do I have a breach containment mechanism?
  • Do I have a response plan and automated remediation in place during a data breach?

If you are complying with regulatory requirements such as HIPAA and HITECH,

  • Have I reviewed HIPAA and HITECH guidelines and requirements?
  • Do I have the right measures and policies in place in conformance with compliance norms?

  

Once you have understood the state of risk to your IT infrastructure, look for cost-effective options to enhance cybersecurity and network defense. Read this free white paper to understand more about healthcare IT security and risk management.


The following is an actual description of a discussion one of our engineers, Matt Quick, had with a customer as told in his words.

 

A customer using the product evaluation copy of SolarWinds Server and Application Monitor (SAM) called us wanting an extension on the trial because “SAM was broke, keeps alerting a component when we know everything was fine.”  I asked to take a look at the customer’s historical data.  The component in question was actually from the Windows 2003-2012 Services and Counters, specifically “Pages/sec” was going critical.  I’d seen this before, and it always relates back to the disk.

 

“But this VM is backended by a NetApp!!  It can do 55,000 IOPS!!!”  Yeah, I was suspicious at that, so, I asked them, “ok, do you have Storage Manager (STM) or another storage monitoring product installed so we can check?”  Sure do, and he promptly informed me that NetApp’s Balancepoint was telling him that while he averaged about 860 IOPS per day, during that hour he spiked to 1350 IOPS, still well within his supposed “55,000” IOP limit.

 

Ok, so, I go into SolarWinds Storage Manager, hit search in the upper right, and find the VM with the component in question.  I go to the storage tab and into Logical Mapping to find which LUN and aggregate it belongs to.  Next, I go into the NetApp and look at the RAID report to see how many IOPS it can do.  A quick calculation later, I estimate about 3,500 IOPS total.  The customer then realizes the original number of “55,000 IOPS” probably isn’t realistic for his specific setup.  Then I look at the volume IOPS report on the NetApp for the same timeframe.  Sure enough, March 1st @ 8:30 pm, a 3,500 IOPS spike.
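
For readers wondering how that kind of estimate is made: a common back-of-envelope method is spindle count times per-disk IOPS, with a write penalty applied on top for the RAID type. The numbers below are purely illustrative (they are not the customer's actual RAID report), but they show how a figure in the 3,500 IOPS ballpark can fall out of such a calculation.

    # Illustrative aggregate sizing; the drive count and per-disk figure are assumptions.
    spindles = 20          # data drives in the aggregate
    iops_per_disk = 175    # typical planning number for a 15K RPM disk

    raw_read_iops = spindles * iops_per_disk   # ~3,500 IOPS for reads
    # Writes cost more: with a write penalty of 2 (as in RAID 10) the same spindles
    # sustain roughly half that for a pure-write workload; parity RAID costs more.
    write_iops_raid10 = raw_read_iops / 2      # ~1,750 IOPS
    print(raw_read_iops, write_iops_raid10)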

 

“But Balancepoint says I got 1350 at that time!”  So, I ask him to open it up, and sure enough, 1350 @ 9pm.  I ask him to look at the next data point…800 IOPS @ 11pm.  He was looking at a bi-hourly aggregate.  Sure enough, if you aggregate the 8pm hour, you get 1350.  And we couldn’t figure out how to zoom in on NetApp’s software.  At this point the customer is speechless as he realizes his current tools were giving him incomplete information.

 

Then I ask him if Virtualization Manager (VMan) is installed, and sure enough it is.  I look in STM at which datastores are on that aggregate in the NetApp, add all of them into a performance chart in VMan for the same timeframe, and isolate it to a single datastore causing the problem.  From there I add all the VMs related to that datastore, and boom, we find the culprit VM with the problem: apparently someone was running some kind of backup every day @ 8:30 pm.

 

All this from what looked like an ‘erroneous’ SAM alert.

 

This story exemplifies the value of an integrated set of tools that gives you visibility across the extended application stack, from the application and its processes and services through the underlying infrastructure so that you can identify the root cause and then solve hard problems. The following video gives an overview of how we are making this possible with the integration of Server and Application Monitor, Virtualization Manager and Storage Manager to provide extended application stack visibility.


If you have used SAM, STM, VMAN and Database Performance Analyzer to find the root cause of tricky problems or to prevent problems, please share your story (your story is worth a cool 50 thwack points)!

 

 

 
