
Whiteboard


In conjunction with SANS, SolarWinds conducted a survey of IT professionals on the impact of security threats and the use of security analytics and intelligence to resolve those threats.  We isolated the 120 government responses to get a sense of how analytics and intelligence are helping with the ever-increasing security challenges in the federal space.

 

Across the responses, the common thread was uncertainty. From knowing the budget for “information security management, compliance and response” (44 percent said it was unknown), to the number of attacks experienced, to context around normal system behavior, to the roles needed in the organization, what respondents agreed on most was the absence of a clear baseline.

 

What they do know is that security events happen. About 43 percent reported that in the past two years, their organizations experienced one or more attacks that were difficult to detect. Another 28 percent answered “unknown” to this question, continuing our theme of uncertainty.

 

Documented attacks took an average of one week to detect. The three greatest obstacles to discovering these attacks fall into the “we don’t know what we don’t know” category:

  • Lack of system and vulnerability awareness
  • Not collecting appropriate operational and security data
  • Lack of context to observe “normal behavior”

 

So, how is this problem overcome? With data of course! The data being used most frequently in the federal space to investigate security issues are:

  • Log data from networks and servers
  • Network monitoring data
  • Access data from applications and access control systems

 

In the next 12 months, respondents say they plan to begin using the following reporting data to enhance their security monitoring:

  • Monitoring and exception data pertaining to internal virtual and cloud environments
  • Access data from applications and access control systems
  • Security assessment data from endpoint, application and server monitoring tools

 

But as we all know, the more data you get, the more difficult it is to manage and make sense of it all. For that data to be effective, there needs to be a level of analytics. There is an even split between respondents saying that they correlate threat data using internally developed methods and those that say they do not correlate log data with external threat intelligence tools at all (43 and 42 percent, respectively). For those using analytics and analytic tools, the majority reported the biggest weakness was determining and measuring against a baseline.
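Correlating internal log data with an external threat feed can start very simply: at bottom it is a set intersection between observed source addresses and known-bad indicators. A minimal sketch, where the feed contents, log format, and field names are all hypothetical examples:

```python
# Minimal sketch: flag log entries whose source IP appears in a threat feed.
# The feed contents and log records below are illustrative, not real data.
threat_feed = {"203.0.113.7", "198.51.100.23"}  # known-bad IPs from an external feed

log_entries = [
    {"src_ip": "192.0.2.10", "action": "login"},
    {"src_ip": "203.0.113.7", "action": "file_read"},
    {"src_ip": "198.51.100.23", "action": "login_failed"},
]

def correlate(entries, feed):
    """Return log entries whose source IP matches a threat indicator."""
    return [e for e in entries if e["src_ip"] in feed]

hits = correlate(log_entries, threat_feed)
print(len(hits))  # 2 matching entries
```

Real deployments layer time windows, asset context, and indicator expiry on top of this, but the core join between logs and intelligence is this simple.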

 

What does this all mean? In order to get a handle on security threats, organizations must focus not necessarily on analyzing outliers, but on what the normal range should look like. Determining that baseline using monitoring tools and putting effort into correlating historical data with threat information will create more certainty and pay great dividends in being able to more quickly spot security events.  
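One common way to establish such a baseline is to compute the mean and standard deviation of a metric from historical monitoring data and flag values that deviate by more than a few standard deviations. A rough sketch with illustrative numbers (not survey data):

```python
import statistics

# Hypothetical historical samples of a monitored metric (e.g., logins per hour).
history = [100, 104, 98, 101, 99, 103, 97, 102]

mean = statistics.mean(history)
stdev = statistics.pstdev(history)  # population standard deviation of the baseline

def is_anomalous(value, k=3):
    """Flag values more than k standard deviations from the baseline mean."""
    return abs(value - mean) > k * stdev

print(is_anomalous(101))  # within the normal range -> False
print(is_anomalous(250))  # far outside the baseline -> True
```

The threshold `k` is a tuning knob: lower values catch more events at the cost of more false positives, which is exactly the baseline-calibration weakness respondents reported.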

 

Full public sector survey results are available by request.

SolarWinds, in conjunction with SANS, recently released the results of a security survey* of more than 600 IT professionals representing a broad range of industries and organization sizes. The survey was conducted to identify the impact of security threats and the use of security analytics and intelligence to mitigate those threats. 

 

Key Survey Findings:

Survey respondents generally agreed that support for managing security today was inadequate, with key impediments being lack of visibility to effectively detect and respond to threats, as well as limited security budgets.

 

Lack of Threat Visibility:

A majority of respondents expressed their need for greater security data visibility and context to identify and respond to threats faster.

 

Forty-five percent of respondents reported that in the past two years their organization experienced one or more advanced threats that were difficult to detect, with the average detection time being one full week (a lot of damage can be done in that time). Even scarier, 21 percent reported that they lacked enough visibility to even answer the question around whether or not they had experienced an advanced threat.


Top reasons cited for "difficult to detect" threats were:

  • Not collecting appropriate operational and security data
  • Lack of context to observe normal behavior (and set baselines)
  • Lack of system and vulnerability awareness
  • Lack of skills and training

 

To improve threat visibility and security intelligence, survey respondents said they plan to invest in better SIEM (Security Information and Event Management) tools and more security-specific training. But given the limited security budgets of many organizations (which we discuss below), will these "planned" investments end up getting pushed to the back burner?

 

Limited Security Budget:

IT departments today are having to do more with less. IT budgets have been shrinking, so it should be no surprise that respondents cited lack of budget as a key impediment to managing security.


Many of those surveyed indicated that they are working with limited budgets to properly manage “information security, compliance and response”, with nearly half of respondents reporting that they spend 20 percent or less of their IT budget on security. This is definitely a cause for concern given the ever-growing threat landscape and the advanced nature of attacks.


The question then becomes: how do you maximize limited security budgets to improve threat intelligence and response?


Conclusion

Security is everyone’s problem. Securing IT is no longer just the role of a security expert; every IT pro needs to be equipped to tackle security challenges. At the same time, keeping costs down will always be a driving factor for businesses. This is why it’s so important to invest in easy-to-use, affordable security management tools that don’t require a lot of time or budget to implement, but instead provide visibility and control right out of the box.

Check out this SlideShare to view more details on the survey results.

“The cloud” has had a positive impact on business environments by providing industry professionals with reliable and immediate access to the software necessary to perform their business functions. While the cloud is equally beneficial to technicians responsible for maintaining computer infrastructures, IT is still reluctant to fully adopt the paradigm. Here we will address some of the fears associated with cloud computing.

[Image: afraid-of-the-cloud-jonathan-lampe.png]

Cloud Barriers and Concerns

The first issue IT has with the cloud is lack of control. Often the objection is not just that the technology is managed outside the IT department, but that the decision to adopt it circumvented IT in the first place. Through intuitive user interfaces and "self-provisioning," cloud services have made it easy for non-technical teams to provision their own tools, and even to make resource decisions that were formerly the sole domain of IT. So far it's working: research published last month found that more than 60 percent of IT purchases are now being made by line-of-business employees.

However, the cloud discussion covers more than just organizational issues. There are also legitimate business continuity and risk concerns that make many technology leaders reluctant to adopt these services. The risk of data loss or theft, for example, is a prevalent issue among all industries. While conventional wisdom holds that the principle of least privilege should prevail, the "easy-to-subscribe, easy-to-share" model built into many cloud offerings means that there are often more people with access to critical information than there should be.

The issue is further exacerbated by the fact that cloud providers are not always clear in their contracts about what services, protections, and insurance they provide. According to Gartner, this makes it difficult to employ effective cloud risk management strategies. For example, for organizations that cannot rely on providers for compensation during downtime, adopting the cloud carries an unnecessary and unwanted risk to their budgets. Other potential threats cloud services pose to organizations' bottom lines include:

  • Compliance fines if providers fail to meet regulatory mandates
  • Loss of customer trust if the provider's system is breached
  • New skills or extra labor time required to manage cloud systems


All this means that IT professionals must be proactive in addressing cloud-related risks and in implementing the technology that line-of-business employees require to do their jobs. A recent audit of NASA's cloud computing deployment found that employees will likely turn to cloud services even in the most secure and well-educated environments, regardless of whether IT knows about it. In other words, rather than fight the cloud (and be ignored), IT should find ways of incorporating it in a controlled fashion, while using the paradigm to create its own business value.


How IT Can Embrace the Cloud - Slowly and Safely


The first step in reducing risk in the cloud is establishing exactly what the technology needs to do. Once this is known (perhaps after watching a few cloud subscribers in action), IT can play a vital role in creating policies that define the minimum protections a provider must have. For instance, companies that handle credit card data may need a policy that dictates the use of a provider that is compliant with the Payment Card Industry Data Security Standard (PCI DSS). Other factors to consider include:

  • The provider's policy for notification in the event of a data breach
  • The use of encryption
  • A system for measuring value to ensure return on investment
  • A clear use case for the cloud, related to specific features (file sharing, collaboration, storage, etc.)
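A checklist like this can even be turned into a simple automated gate during provider evaluation. A minimal sketch, where the policy field names and the provider record are hypothetical; real evaluations would pull these facts from vendor documentation and contracts:

```python
# Minimal sketch of a provider-policy gate. Field names are hypothetical.
REQUIRED = {
    "pci_dss_compliant": True,    # needed for organizations handling card data
    "breach_notification": True,  # provider commits to notify on a data breach
    "encryption_at_rest": True,
}

def policy_gaps(provider):
    """Return the list of policy requirements the provider fails to meet."""
    return [k for k, v in REQUIRED.items() if provider.get(k) != v]

candidate = {"pci_dss_compliant": True, "breach_notification": False,
             "encryption_at_rest": True}
print(policy_gaps(candidate))  # ['breach_notification']
```

The value is not in the code but in forcing the minimum protections to be written down explicitly before any subscription is signed.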


As Gartner warned, IT decision makers should focus on their cloud contracts and push for clarity early on. This means establishing firm service-level agreements with provisions for total uptime as well as potential compensation if SLAs are broken. Furthermore, analysts recommended that the contract include a process to cancel service if the provider fails to meet expectations.
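When pushing for uptime provisions, it helps to translate an SLA percentage into concrete allowable downtime; "99.9% uptime" over a 30-day month, for instance, permits roughly 43 minutes of outage. A quick sketch of the arithmetic:

```python
def allowed_downtime_minutes(sla_percent, days=30):
    """Convert an uptime SLA percentage into allowed downtime (minutes) per period."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

print(round(allowed_downtime_minutes(99.9)))   # ~43 minutes per month
print(round(allowed_downtime_minutes(99.99)))  # ~4 minutes per month
```

Framing SLA negotiations in minutes of outage, rather than nines, makes the compensation clauses far easier to reason about.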

"Concerns about the risk ramifications of cloud computing are increasingly motivating security, continuity, recovery, privacy, and compliance managers to participate in the buying process led by IT procurement professionals," said Gartner analyst Alexa Bona. "They should continue regularly to review their cloud contract protection to ensure that IT procurement professionals make sustainable deals that contain sufficient risk mitigation."

It is also important to recognize that the cloud (at least the public cloud) is not always the best option for data that should never sit on third-party servers. With advances in virtualization technology, it has become easier for IT to keep hold of the reins by deploying private clouds. This option ensures that the IT department stays in control. However, for this strategy to succeed, it is critical for IT to deliver on the cloud paradigm's essentials, including automated resource provisioning and accessibility.

The ease with which resources can be provisioned in the cloud makes it essential to incorporate usage and access monitoring tools so that the cost of storage, computing, and other resources does not spiral out of control. It also requires IT administrators to become familiar with classes of applications that can be deployed either on-premises or in the cloud, such as human resources packages, sales automation, and secure file sharing.
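That usage monitoring can start as simply as aggregating spend per team and flagging budget overruns. A minimal sketch with hypothetical usage records and budget figures:

```python
from collections import defaultdict

# Hypothetical usage records: (team, monthly cloud cost in dollars).
usage = [("marketing", 420.0), ("engineering", 1800.0),
         ("marketing", 310.0), ("sales", 95.0)]
budgets = {"marketing": 500.0, "engineering": 2000.0, "sales": 200.0}

def over_budget(records, budgets):
    """Return teams whose aggregated spend exceeds their allotted budget."""
    totals = defaultdict(float)
    for team, cost in records:
        totals[team] += cost
    return sorted(t for t, total in totals.items() if total > budgets.get(t, 0.0))

print(over_budget(usage, budgets))  # ['marketing']
```

Production tooling adds per-resource breakdowns and trending, but the core discipline is the same: meter the spend, compare it to an agreed limit, and surface the exceptions.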


Your Thoughts


Do you embrace the cloud, fear the cloud, or both?  Please tell us your thoughts and experiences in the comments below. 

More than 700 IT managers in six countries – Australia, Brazil, Canada, Germany, the UK and the USA – broadly agreed that network complexity has impacted their IT role over the past five years.

 

In North America, 92 percent of those polled agreed, while 85 percent did so in Germany and 80 percent in Australia, Britain and Brazil.

 

IT professionals in all six countries also are in broad agreement on the main technology, IT operations and business operations challenges driving network complexity: virtualization, BYOD and security.

 

Compute virtualization ranked as the most significant technology driver of network complexity for respondents in Australia, the UK and Germany in the IT Professionals Survey conducted by C. White Consulting for SolarWinds. IT professionals in Brazil and North America ranked virtualization second, behind software-defined networking (SDN) and the introduction of smarter, more complex equipment, respectively.

 

Bring-your-own-device (BYOD) ranked as the top IT operations driver for Australia, North America and the UK, while German IT pros opted for mobility and BYOD, and Brazilians for public cloud or Software as a Service (SaaS) and mobility.

 

Security was the main driver of network complexity in business operations in all six countries.

 

Take a look at our infographic covering the North American trends (click the link to download a full-size copy) and then dive into the worldwide data in the embedded slideshares below:

[Infographic: NetComplexInfoGraphic.png]

We recently completed a survey on the impact and drivers of network complexity (detailed results can be found here), where 80% of IT pros based in the UK indicated that increasing network complexity had impacted their role within the last three to five years. To any IT pro this is nothing new, but the top drivers of this network complexity might catch a few by surprise.

 

We broke the drivers into three categories:

  • Technology
  • IT Operations
  • Business Operations

 

TECHNOLOGY

Within technology, the top two drivers were compute virtualisation and smarter/more complex equipment. The third most popular, which fell significantly behind the first two, was video conferencing/telepresence. To me, equipment complexity is interesting; it feels contradictory to the value proposition of smarter equipment – isn’t that supposed to make your life less complex, and save you time and money? Perhaps the savings in hardware are being consumed by the software and manpower costs required to manage the more complex gear. Software Defined Networking (SDN) – the fourth most popular – was also an interesting trend given how early in the deployment cycle we are; it seems that many folks are projecting how they think complexity will be impacted by SDN.

 

IT OPERATIONS

On the IT operations side, I was surprised that BYOD and mobility ranked #1 and #2 for adding complexity to the delivery of IT services. The ubiquity of mobile devices and the rate at which they’ve penetrated the work environment is amazing, and likely caught many IT organisations off guard. Of course, when the driver for a new IT service like mobility is senior management, it’s tough to say no, so many folks charge forward and figure out the operational details after the fact.

 

BUSINESS OPERATIONS

Finally, from a business operations standpoint, security is the standout complexity driver. From zero-day threats to SIEM, there’s no shortage of new things being added to the IT plate – and in case you didn’t know, security is every IT pro’s and every business’s concern, not just the security team’s!

 

GETTING AHEAD OF THE COMPLEXITY

By now you’re probably feeling a bit overwhelmed and understaffed, and it’s no surprise. What is surprising is that the number one skill needed to address the challenges of network complexity is ‘understanding the business’, identified by nearly one-third of UK IT pros. The context that business knowledge provides in making the right decisions is remarkable. The industry can throw all the technology it can muster at you, but if you understand the business needs you can cut through the hype and predicted benefits and get down to brass tacks. Shortly behind business understanding are network engineering and information security – in our connected world, those responses make a lot of sense.

 

So how do you get these skills? Training, of course. But it seems that management still doesn’t understand the value of training, as budget and time were the top two barriers to getting the help you need. This is a continuing trend: at a time when management will spend hundreds of thousands of pounds on hardware and software, getting the time and money to train properly seems to require a two-thirds majority in Parliament.

 

So there you have it – the state of network complexity in 2013 – change is inevitable in the IT space and getting the training and development to stay ahead of the game is critical.

 

Got thoughts on this topic?  Let us know.


  

When I first started working in I.T., my toolbox was pretty thin. Actually, so were the MS-DOS PCs I was working on, so it didn't take a lot of tools to manage an 8086 PC. My entire toolset fit on a single 360 KB floppy disk. Life in I.T. has changed dramatically since that long-ago time, and today the IT pro's toolbox is an indispensable thing. To a great extent, that toolbox is no longer something we carry around with us, but rather capabilities and services that make doing what we do so much easier.

 

Recently I was asked to identify the Top 10 innovations that changed my life in I.T. I solicited a bit of help from some friends, and this is the list we came up with:

[Infographic: 10 IT Innovations That Changed Your Life]

Government has always been a complex morass of differing ideals, morals and motivations… but we’re not here to talk about political nuances. The “complexity” we’re concerned with centers on the IT technologies our government uses every day to serve citizens, from network infrastructure to application management and monitoring tools.

 

There’s no debate that IT networks are becoming more complex. We surveyed more than 100 government IT professionals to find out what is driving this complexity and what can be done about it.

 

For this survey, we defined network complexity as “the continuously growing, increasingly complicated nature of the network due to new technologies (such as SDN, virtualization, etc.) as well as the ever-increasing responsibilities placed on IT professionals from an IT operations perspective (by supporting new service offerings such as cloud, mobility, etc.) and business operations perspective (such as security or compliance).” Based on this definition, more than 93 percent of respondents said that increased network complexity changed their IT role/responsibilities within the last three to five years.

 

So, what is the driving force behind this complexity? Really there are three factors – Technology, IT Operations, and Business Operations.

 

On the Technology side, “smarter equipment” (meaning that what once required three pieces of equipment can now be done by a single piece) was consistently ranked highest as a technology driving complexity. Looking at IT Operations, this idea of “smarter equipment” continues to impact complexity, with mobility and Bring Your Own Device (BYOD) both ranking high on respondents’ lists of areas that increase network complexity. Both public and private cloud were included among the possible responses, and both received tepid, middle-of-the-road responses in terms of their impact, putting them on par with feelings about Voice over IP (VoIP).

 

On the Business Operations side, IT professionals also are being asked to take on additional responsibilities to more directly support business operations. The primary responsibility impacting complexity is Security. Security far outpaced Auditing and Compliance in terms of its impact on network complexity among government respondents.

 

Given that equipment is getting smarter and IT professionals are being asked to do more, it is not a huge surprise that our respondents want to get smarter themselves. 73 percent said that training IT staff was key to being as prepared as possible for growing network complexity. Security and understanding the business were the areas respondents ranked as most critical for training over the next five years. Yet despite this critical need, 42 percent say it is difficult to gain approval for training from their organization.

 

A presentation is available here that highlights the full survey results in detail.

We recently completed a survey on the impact and drivers of network complexity (detailed results can be found here) where 92% of IT pros indicated that increasing network complexity had impacted their role within the last 3 to 5 years. To any IT pro this is hardly surprising news, but what is interesting is the top drivers of this network complexity.

 

We broke the drivers into 3 categories:

  • Technology – new technologies promise to make IT simpler/better but we all know that this doesn’t always work out.
  • IT Operations – IT is always being asked to deliver new services, sometimes based on new technology, other times based on old technology.  But every new service requires work to run it well.
  • Business Operations – IT and business?  Yes, it’s true – business needs do drive IT and some cause more pain than others.

 

Technology

Within technology, the top two drivers were smarter/more complex equipment and compute virtualization. The third most popular, which fell significantly behind the first two, was Software Defined Networking (SDN). To me, equipment complexity is interesting; it feels contradictory to the value proposition of smarter equipment – isn’t that supposed to make your life less complex, and save you time and money? Perhaps the savings in hardware are being consumed by the software and manpower costs required to manage the more complex gear. SDN was also an interesting trend given how early in the deployment cycle we are; it seems that many folks are projecting how they think complexity will be impacted by SDN.

 

IT Operations

On the IT operations side, I was surprised that BYOD and mobility ranked #1 and #2 for adding complexity to the delivery of IT services. The ubiquity of mobile devices and the rate at which they’ve penetrated the work environment is amazing, and likely caught many IT organizations off guard. Of course, when the driver for a new IT service like mobility is senior management, it’s tough to say no, so many folks charge forward and figure out the operational details after the fact.

 

Business Operations

Finally, from a business operations standpoint, security is the standout complexity driver. From zero-day threats to SIEM, there’s no shortage of new things being added to the IT plate – and in case you didn’t know, security is every IT pro’s problem, not just the security team’s!

 

Getting Ahead of the Complexity

By now you’re probably feeling a bit overwhelmed and understaffed, and it’s no surprise. What is surprising is that the number one skill needed to address the challenges of network complexity is “understanding the business”. The context that business knowledge provides in making the right decisions is amazing, and the majority of you know it. The industry can throw all the technology it can muster at you, but if you understand the business needs you can cut through the hype and predicted benefits and get down to brass tacks. Shortly behind business understanding are network engineering and information security – in our connected world, those responses make a lot of sense.

 

So how do you get these skills? Training, of course. But it seems that management still doesn’t understand the value of training, as budget and time were the top two barriers to getting the help you need. This is a continuing trend: at a time when management will spend hundreds of thousands of dollars on hardware and software, getting the time and money to train properly seems to require a two-thirds majority in Congress.

 

So there you have it – the state of network complexity in 2013 – change is inevitable in the IT space and getting the training and development to stay ahead of the game is critical.

Got thoughts on this topic?  Let us know.

The debate is not new: VMware versus Hyper-V. Will Microsoft’s free Hyper-V capability ever catch up with VMware in the hypervisor race? This discussion has been going on for years, but it has rarely reached down into the daily lives of VMware administrators. For many reasons, the answer was usually an easy one: VMware.


Hyper-V wasn’t really ready for production and didn’t have the critical capabilities needed to run ever more demanding workloads. It lacked comparable vMotion capabilities, couldn’t match VMware’s Distributed Resource Scheduler (DRS) functionality, and in particular was much more resource intensive. Using Hyper-V was fine as long as the Windows team played with it in non-mission-critical fringe activities or applications, but it never really rivaled VMware in the data center.


However, the Hyper-V capabilities included with Microsoft Windows Server 2012 now change that discussion.  Many of the gaps have been eliminated or reduced, and in some cases, Microsoft has even jumped ahead of VMware.  And then there is the issue of price, as Hyper-V comes free with Windows Server 2012.  For a wide range of use cases, Hyper-V can be substantially cheaper than VMware.


In many instances, this new competitiveness is not making VMware admins happy.  Here are my Top 5 reasons why VMware admins don’t want Hyper-V to catch up:

 

  1. I know vSphere. I like vSphere.  Change is hard even if it is for the best.  VMware admins have been working with VMware hypervisor products for years, and so they know all the “ins and outs” of working with it based on years of experience.  Even if Hyper-V is equivalent, it would mean going back on a learning curve to figure it out.
  2. I don’t want the bean counters to tell me what to do.  Largely the decision to use Hyper-V is a financial one, not a technical one.  Few people are claiming the latest version has eclipsed VMware; the discussion is more around product parity.  That means someone other than the technical end-user is ultimately making the decision. 
  3. I don’t pay for it, so I don’t care.  Since admins are only indirectly affected by the cost of VMware licensing, this does not provide much incentive for making the switch. However, there is more potential impact than admins may realize here, as dollars spent on VMware licensing can eat up dollars for other things admins do care about—like extra servers or more storage.
  4. I don’t want Microsoft or the Windows team to own everything.  To be sure, VMware’s drive to develop and mature virtualization technologies brought a new balance to the server market where Microsoft was becoming more and more dominant.  Many people (admins included) simply don’t want to go back to the one-company-server-vendor world they’ve seen in the past.
  5. I’m not convinced it’s good enough.  While there is a lot of discussion about Hyper-V closing the gap with VMware, it is hard to translate that hype into the feature/function reality of what it means in day-to-day operations.  This is one area where SolarWinds can help (see below).


With the help of blogger and vExpert Scott Lowe, SolarWinds is hosting an in-depth webcast (to be accompanied by a white paper from Scott) discussing exactly how the latest version of Hyper-V 2012 compares to VMware vSphere 5.1, and what the resulting pros and cons mean for the virtualization admin. The webcast is a free live event, so early registration is encouraged.

 

 

Webcast Event:  Hyper-V® 2012 vs. vSphere™ 5.1: Understanding the Differences
Featured Speaker:  Scott Lowe
Time & Date:   Tuesday, June 18, 2013 at 10 a.m. Central
Overview: The webcast will walk through a comparison of the scalability and features of Hyper-V and VMware hypervisors, including an in-depth look at:

  • Architecture & footprint
  • CPU & memory management
  • Storage capabilities
  • Mobility & availability

 
Register Here: https://www1.gotomeeting.com/register/472068609


Also, stay tuned for a companion white paper that will provide a detailed review of both hypervisors.

SMBs are considering virtualizing their servers more than ever. Confirming this trend, the recent State of SMB IT report from Spiceworks™ cites that 72% of SMBs had adopted server virtualization by the second half of 2012, a figure expected to grow to 80% by the end of the first half of 2013.

 

Why this upward trend?

 

The obvious answer is that SMBs are trying to get more performance and flexibility out of their existing server resources. Considering the rising cost of managing physical data centers and adding more resources, and the impact of underutilized server resources, virtualization is the best way to do more with less.

 

  • Virtualizing servers lets SMB IT teams allocate resources wherever required and reclaim unused resources, resulting in better resource utilization.
  • Server virtualization has also improved disaster recovery and high availability, allowing admins to take VM snapshots with ease.
  • A reduced server count also means fewer potential hardware failures and fewer physical servers for IT professionals to manage.

 

All this translates to ROI.

 

There is apprehension in the SMB server room about the cost of investment, virtualization being a new technology to venture into, the unidentified challenges that may spring up, whether the architecture is designed only for large enterprises, and so on. Despite all these doubts, the growing demand for virtualization reinforces the fact that IT is evolving in the SMB space, and small- and mid-scale companies are coming forward with budgets to invest in newer technologies like virtualization and the cloud.

 

Virtualization is good for SMBs, right?

 

Yes, it is. Of course. No doubt.

 

Here comes the ‘but’ factor. With the adoption of new technologies like server virtualization, the data center opens up to various management challenges. Though management is generally easier in a virtual setup, it’s still a challenge for SMBs without the right tools for monitoring the virtual infrastructure, alerting when something goes wrong, and managing the allocation of CPU, memory, network and disk resources.

 

As companies try to leverage the cost benefit of implementing multi-vendor hypervisors, it becomes more difficult to get visibility into the heterogeneous virtual environment.

 

Get Visibility & Control

 

Using affordable and easy-to-use multi-vendor virtualization management tools, SMBs can:

 

  • Get VM-to-spindle visibility into VM, host, cluster and datastore performance issues
  • Identify the resource contention across your virtualized infrastructure
  • Diagnose storage I/O bottlenecks
  • Monitor VM sprawl issues
  • Perform VM rightsizing and capacity planning on your existing virtualized server environment to find how many more VMs can be added, and determine the budget for adding more CPU, memory, network and storage resources
  • Use chargeback and showback in a virtual environment and help you control its growth, as well as track resource consumption
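The capacity-planning arithmetic behind rightsizing can be sketched simply: compare each resource's remaining headroom against the footprint of one more VM, and let the scarcest resource decide. The host and per-VM numbers below are purely illustrative:

```python
def additional_vm_capacity(total, used, per_vm):
    """How many more VMs fit, given total and used host resources.

    Each argument maps a resource name (e.g., CPU cores, RAM in GB) to an
    amount; the binding (scarcest) resource determines the answer.
    """
    return min((total[r] - used[r]) // per_vm[r] for r in per_vm)

total = {"cpu_cores": 64, "ram_gb": 512}   # hypothetical cluster capacity
used = {"cpu_cores": 40, "ram_gb": 384}    # current allocation
per_vm = {"cpu_cores": 2, "ram_gb": 8}     # average footprint of one more VM

print(additional_vm_capacity(total, used, per_vm))  # 12 (CPU is the binding resource)
```

Commercial tools refine this with historical peak usage and overcommit ratios, but the bottleneck-resource logic is the heart of capacity planning.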

[Image: Virt.png]

Virtualization is not a big risk for SMBs, and management will definitely not be a deterrent as long as you have the right visibility into your virtual servers and take control of bottlenecks and performance issues to get the best bang for your buck.

 

Viva virtualization: SMBs and all!

Today, we released the results of a survey that looked into the adoption by UK and German SMEs of today’s key technology trends, specifically Cloud Computing and BYOD. While the advantages of Cloud Computing and BYOD have been championed by larger enterprises, the study showed that SMEs have yet to recognize and embrace their far-reaching benefits, despite standing to gain the most from these flexible and cost-effective solutions.

 

The study found that almost half of UK SMEs do not have a BYOD policy, and worryingly, those that do permit it (24 per cent) said that it goes unmanaged by the IT department.  Fifty-eight per cent of UK SMEs said that BYOD was not necessary for their business, and 42 per cent considered it a security risk.  We think there is a huge opportunity here for SMEs.  BYOD not only enables them to make significant cost savings through reduced expenditure on supplying devices, but can also lead to improved productivity and staff retention, as employees are familiar and comfortable using products of their choice.  Security concerns can be addressed through simple, clear policies.

 

When looking at Cloud Computing, over a third of UK organisations do not plan to adopt a cloud solution of any kind.  Thirty-nine per cent said that they do not trust cloud solutions, and a third said that they were unaware of the benefits Cloud Computing can bring to their business and saw it as irrelevant.  Again, Cloud Computing represents an opportunity for SMEs, as it enables them to be more flexible and responsive to market trends, whilst increasing productivity, decreasing costs and reducing environmental impact.

 

As the backbone of the UK economy, SMEs are under increased pressure to compete with global counterparts and operate within tighter budgets. The results of the previous survey in this series found that companies were investing in IT as a strategic business priority. However, a question needs to be asked as to whether they are investing in the most flexible and suitable IT solutions given the nature of the SME business.

 

Ultimately UK SMEs have the opportunity to differentiate themselves from the competition and adopt the right technology, such as Cloud Computing and BYOD, to not only serve them today but also grow with them in the future. This is the second in a series of surveys, which looked at the pressures facing IT managers today. We’ve also looked at which resources IT managers use to stay informed and up to date on the IT industry and whose advice they trust most when making IT software decisions.

 

For further details, and to see how this picture compares with businesses in Germany, check out the full survey here, or embedded below:

 

We’ve recently surveyed the IT community to get an understanding of their help desk software needs. Over 180 help desk professionals participated in the survey and provided insights on help desk software requirements and challenges. Below are some of the key results of the survey.

 

How well does your help desk solution manage ticketing functions?

chart1.png

Around 50% of the respondents feel that their help desk software does not facilitate efficient change/approval management and simple, flexible workflow.

How many help desk tickets do you process daily?

chart2.png

Over 30% of the help desk professionals work on more than 10 tickets each day.

Time spent on managing and tracking tickets

chart3.png

Over 48% of the respondents spend more than 1 hour just managing and tracking tickets.

What are the key help desk functions in organizations?

chart4.png

Knowledge base management, automation, SLA management, and performance dashboard management top the help desk requirements in organizations. In addition to these key functions, what if the help desk software could integrate with your network management software to trigger tickets automatically during node failures?
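The integration idea above — having the monitoring side open tickets automatically when a node fails — can be sketched as a simple event hook. This is illustrative code only; every class and method name here is hypothetical, not an actual product API:

```python
# Illustrative sketch of network-monitoring-to-help-desk integration:
# when the monitor reports a node down, a ticket is opened automatically.
# All class and method names are hypothetical.

class HelpDesk:
    def __init__(self):
        self.tickets = []

    def create_ticket(self, summary, priority):
        ticket = {"id": len(self.tickets) + 1,
                  "summary": summary,
                  "priority": priority}
        self.tickets.append(ticket)
        return ticket

def on_node_event(helpdesk, node, status):
    # Trigger a ticket only on failure events; ignore recoveries.
    if status == "down":
        return helpdesk.create_ticket(
            summary=f"Node {node} is unreachable", priority="high")
    return None

hd = HelpDesk()
ticket = on_node_event(hd, "core-switch-01", "down")
print(ticket["summary"])  # Node core-switch-01 is unreachable
```

In practice the hook would also de-duplicate alerts and close tickets on recovery, but the pattern — monitoring events feeding the ticket queue directly — is the point.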

 

Help desk ticketing can be much simpler, more streamlined, and more efficient when you find a solution that incorporates each of these critical areas. What’s the bottom line?

Communication. Automation. Integration.

 

A full SlideShare presentation is posted below.

It contains the complete survey findings, practical implications, and our recommendations.

 

Despite making large investments in your network’s security, have you experienced the pain of service unreachability or a network breach caused by a seemingly innocuous firewall rule change? Does preparing for an audit or cleaning up your rulebase seem like an impossible task? To top it all, do you really know if your firewalls are doing their job?


The fact is, the time and cost (direct and indirect) involved in making ACL changes really add up, especially as your rulebase grows more complex over time. Firewall changes take time to test and validate to make sure they are not increasing security exposure or disrupting critical services.


Find out how to make the case that firewall management is not merely an insurance-motivated security program (without measurable impact), but rather an integral part of day-to-day IT operations, with the new SolarWinds Firewall Security Manager (FSM) ROI calculator. With the FSM ROI calculator, you can easily identify gaps in everyday management tasks that drain your operational effectiveness.

Factors influencing ROI

Process inefficiencies are present in every system. Steps must be taken to identify and bridge these gaps, as well as optimize running processes. The first step is to recognize the factors that directly or indirectly impact cost. These include:

  • Number of firewalls and firewall audits
  • Number of change requests and connectivity incidents
  • Average time spent per change request, troubleshooting connectivity incidents, and preparing a firewall for audit


All of the above, together with the average loaded cost of an IT professional and the cost of a firewall, directly influence the costs incurred in running a functional firewall security system for your network.
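To make the arithmetic concrete, here is a back-of-the-envelope model built from the input factors listed above. This is a hedged sketch of the kind of calculation an ROI calculator performs, not the FSM calculator’s actual formula, and all figures are invented for illustration:

```python
# Hypothetical back-of-the-envelope model of annual firewall operations cost,
# mirroring the input factors listed above. All figures are illustrative only.

def annual_ops_cost(change_requests_per_year, hours_per_change,
                    incidents_per_year, hours_per_incident,
                    audits_per_year, hours_per_audit,
                    loaded_hourly_cost):
    """Total yearly labor cost of change requests, incidents, and audits."""
    hours = (change_requests_per_year * hours_per_change
             + incidents_per_year * hours_per_incident
             + audits_per_year * hours_per_audit)
    return hours * loaded_hourly_cost

cost = annual_ops_cost(change_requests_per_year=200, hours_per_change=2,
                       incidents_per_year=50, hours_per_incident=4,
                       audits_per_year=4, hours_per_audit=40,
                       loaded_hourly_cost=75)
# 200*2 + 50*4 + 4*40 = 760 hours at $75/hour
print(cost)  # 57000
```

Shave even 25% off the per-task hours with tooling and the savings fall straight out of the same formula — which is exactly the comparison an ROI calculator lets you make.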


How do these parameters impact cost?

A major share of network downtime is due to badly executed configuration changes caused by manual processes and human error. Adding to network management woes, a cluttered rulebase full of unused and redundant rules not only leaves your network open to attack, but also makes security audits painful and tedious to perform and compliance more difficult to achieve.


How can the FSM ROI calculator help?

With the ROI calculator, one can clearly see the bearing of each parameter/task on costs.


The ROI calculator can help you:

  • Determine, understand and address the tasks that take up most of your time and effort
  • Quantify your project’s value to support quick decision making
  • Optimize currently running operations and utilize cost savings for other pressing requirements
  • Add value to your business


So, use this handy calculator to help make the case that a firewall management tool is a great idea--both for your security posture and your organization’s bottom line!


Prove how powerful firewall analytics, automated audits, rule analysis, change modeling, and built-in reporting convert to operational efficiencies, and in turn, major cost savings for your company.


Announcing our latest – the new and powerful SolarWinds Firewall Security Manager v6.5!

With feature enhancements to change analytics for IOS and newly added Juniper SRX support, SolarWinds Firewall Security Manager v6.5 helps you simulate and predict how rule changes can impact traffic flow on the network, as well as further enhance your security and compliance. Try SolarWinds FSM for yourself and start simplifying firewall configuration management, reducing errors, increasing efficiencies, and saving money!


Head over to the SolarWinds Firewall Security Manager v6.5 product page to download a fully functional 30-day trial today!

Mike Thompson

When More is Less

Posted by Mike Thompson Apr 30, 2013

More friends, more money, more fun… “More” is generally considered a good thing. But there are plenty of times when “more” isn’t a good thing: more expensive, more difficult, more complex come to mind. The big hypervisor vendors have been focused on “more” in recent years. And in general this is good for consumers. More advanced storage capabilities – Cool! More powerful VMs – Nice! More advanced functionality – Good; even better if you actually use the functionality provided!  

 

The industry trend is that basic hypervisor technology is commoditizing quickly.  While core hypervisor functionality is still at the heart of customer demand, it is rapidly becoming table stakes. 

 

Despite efforts to leverage its dominant position with ESX® and vSphere®, VMware seems to have struggled to come up with an extension to its core capabilities that can sustain the same level of growth as they have had historically.  Given additional threats to its hypervisor base (with both OpenStack® and Microsoft®), VMware is still looking for the next blockbuster market: cloud, application development, operations extensions (e.g., monitoring, patch management). 

 

As the sale of the Shavlik® patch capabilities showed, extending into adjacent markets isn’t always easy.  VMware is now bundling add-on capabilities into “editions” that include more and more advanced capabilities. 

 

Microsoft, the master of the “no choice” bundle, is doing the same.  With Windows Server 2012® and System Center, you generally get everything and the kitchen sink, whether you will utilize it all or not.  The no-option bundle generally pairs at least one “must have” component with a bunch of lower-priority components.  Putting them together and discounting the package means the price of the whole is less than what each piece costs separately.  That works just fine if you are a large enterprise planning to buy all of the pieces anyway.  However, if you are an SMB and/or just need a subset of the capabilities, the bundle may cost more than competitively priced à la carte options, especially if you will not utilize all of the functionality it provides.

 

Since this is not a new approach, most people will look critically at the upfront cost to decide what their viable options are and whether the “no choice” bundle is worth the initial purchase price.  But it is after the initial purchase where hidden costs may start to add up. 

 

We all know some products can be difficult to install and operate.  With companies trying to do more with less, many IT admins have to take on multiple responsibilities to keep the ship afloat.  If your only job is to manage one narrow area and you spend your entire day in one tool, you may like the cutting-edge features that require substantial time to learn, use, and maintain.  However, if you have to manage a variety of disparate tasks that require you to jump in quickly, figure out what is going on, solve it, and move on to something else, then complex products can be a productivity and budget killer.  Bundled products include both easy-to-use and complex features, and if you don’t have time to learn the idiosyncrasies of all of them, then you are not really getting the full value of your investment.

 

The second hidden cost is maintenance.  The initial purchase price may not look so great when, a year later, you have to start paying maintenance.  Again, maintenance is valuable for products you need, but oftentimes many of these add-on capabilities become shelfware.

 

So, what is the answer? It is pretty simple: if you have confidence that the software products you are selling will deliver on their promise of value, you offer them all at an affordable price and let the customer buy just what they need.

 

Hmm, that sounds familiar.

TiffanyNels

And So, It Ends...

Posted by TiffanyNels Apr 4, 2013

Our final battle was an epic match-up of Trekker proportion.

 

Q vs Spock…  Honestly, I am not 100% sure that Spock “won.” We are fairly certain our community decided that Q needed to “lose.”

 

PotAto, PotAHto…

 

Spock faced down Q in an all Star Trek Final Judgment battle, using his logic to take on Q’s omnipotence. Spock and Q each had to handle a murderer’s row of resurrected challengers, before they could get to each other:

-    For Q, this line-up was comprised of Neo, the Infinite Improbability Drive, the Borg and Number 6.

-    Spock, on the other hand, had to out think Darth Vader, the 10th Doctor, Kaylee and River Tam.

 

In the end, Spock stands alone and will be crowned the Intergalactic Champion of the SolarWinds Sci-Fi Bracket Battle 2013.

 

Live long and Prosper!

 

All judgments aside, we wanted to extend a hearty thank you to the members of our community – both new and old – who voted, debated, tweeted, etc., in the bracket battle.

 

We had a great time and we hope that you did as well.

 

And while the Bracket Battle of 2013 has come to a close…  we are already thinking about options for 2014.

 

What say y’all?  Take to the comments and let us know!

 

Should we stick with the lasers and light sabres?

 

Or, do we take inspiration from Super Smash Brothers and send Pac-Man into the ring against Alex Mason?

 

Lately, we have been contemplating how Gandalf might fare against the White Walkers…
