
Whiteboard


Of the organizations surveyed (in the SolarWinds Email Management Survey, March 2014), over 80% are using Microsoft Exchange for their corporate email.  17% of the companies using Exchange have started moving to the cloud and are also using Office 365.  For organizations that have not moved or have no plans to move to the cloud, significant resources are devoted to managing this application.  SolarWinds listened to our customers and built a solution to help admins improve Exchange uptime while reducing the time spent managing Exchange performance.  The latest release of Server & Application Monitor, v6.1, provides the following capabilities for Exchange 2010 & 2013 environments.

 

  • Consolidated visibility to historical mailbox database usage (all copies), regardless of multiple DAG transitions
  • Replication status checks
  • Quick views of dormant mailboxes and top mailbox offenders, with drill-down into individual Exchange user mailbox details for troubleshooting
  • Real-time view of logs, processes, services
  • Monitor the end-user experience with round-trip tests (MAPI, etc.) to discover patterns that might lead to poor service (see the sketch after this list).
  • Proactive alerting for related applications to include Lync®, ActiveSync® connectivity, and Active Directory® performance.
  • Server hardware health & virtual server performance for multiple vendors
  • Agentless – quick time to deploy and maintain
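
To make the round-trip idea in the list above concrete, here is a minimal sketch, with the caveat that SAM performs MAPI round trips while this stand-in uses plain SMTP/IMAP, and that every hostname and credential below is a placeholder: send a uniquely tagged probe message, then time how long it takes to appear in the mailbox.

```python
# Minimal email round-trip probe: send a tagged message, then time how
# long it takes to appear in the target mailbox. SAM does MAPI round
# trips; this sketch substitutes plain SMTP/IMAP. All hostnames and
# credentials are placeholders.
import time
import uuid
import smtplib
import imaplib
from email.message import EmailMessage

SMTP_HOST = "mail.example.com"                   # placeholder
IMAP_HOST = "mail.example.com"                   # placeholder
USER, PASSWORD = "probe@example.com", "secret"   # placeholders

def round_trip_seconds(timeout=120, poll=5):
    token = uuid.uuid4().hex                     # unique tag for this probe
    msg = EmailMessage()
    msg["From"] = USER
    msg["To"] = USER
    msg["Subject"] = f"probe-{token}"
    msg.set_content("round-trip test")

    start = time.time()
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

    while time.time() - start < timeout:
        imap = imaplib.IMAP4_SSL(IMAP_HOST)
        imap.login(USER, PASSWORD)
        imap.select("INBOX")
        _, data = imap.search(None, "SUBJECT", f'"probe-{token}"')
        imap.logout()
        if data[0]:                              # probe has arrived
            return time.time() - start
        time.sleep(poll)
    return None                                  # timed out: worth an alert

if __name__ == "__main__":
    rt = round_trip_seconds()
    print(f"round trip: {rt:.1f}s" if rt else "probe timed out")
```

Charted over time, a spike in probe latency like this is an early warning that the end-user experience is degrading.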

 

The benefits of having a single view of Exchange performance include:

  1. Better customer satisfaction.  When the help desk is informed of problems in the application, they can better respond to end users and say, “yes, the problem is in XYZ component and we are working to resolve the problem now.”  Help desk admins can also assist end users more quickly because they have all the relevant information at their fingertips to help users reduce their mailbox size (# of attachments, size of attachments, synced devices, sent/received mail).
  2. Faster time to resolve messaging issues.  I spoke to a lot of Exchange admins last week at MEC.  Many were responsible not only for Exchange, but for related applications like Active Directory, Lync, and ActiveSync.  About one-third of admins we spoke to said their Exchange environment was virtualized, so it was important to understand VM performance too.  Most of these admins were using PowerShell scripts to identify and troubleshoot performance issues.  This feedback was in line with our Email Management Survey, which revealed admins commonly use multiple tools to manage email, including logs, Windows Task Manager, WMI, and EMC/EMS with PowerShell.

 

In speaking with some of our customers, they expect to reduce time managing Exchange by 50% with these new features of Server & Application Monitor.  I encourage you to try it out for yourself!

Email is an application that is vital to business operations.  It’s been around a while and it’s not going away.  Despite email being one of the most important applications in the enterprise, there has been little innovation (with the exceptions of DAGs and SaaS/hosted solutions) to improve the efficiency and effectiveness of email availability—even as factors contributing to email management complexity have increased.

 

 

Is email really that hard to manage?

SolarWinds conducted an Email Management Survey (ending in March 2014) of 162 US and Canadian IT professionals with email management responsibilities.  The survey found that 46% of companies have more than 2 FTEs (full-time equivalents) dedicated to managing email.  In organizations with greater than 5,000 mailboxes, 49% employ 6 or more FTEs to manage email.  In addition, the survey found that 53% of time spent managing email is related to monitoring the email application.  For large organizations, that is a lot of people devoted to identifying and responding to problems related to a single process.  Financially speaking, 3 or more FTEs translates to a few hundred thousand dollars or more a year that could be spent on other IT projects that focus on the company’s competitive advantage.

 

Why is email so hard to manage?

Managing email is complex for several reasons.  According to the survey, the prevalence of smart devices has increased the load on email services.  Respondents also believe that BYOx and mobility initiatives (such as support for telecommuters) contribute to the complexity of email management.

 

In addition, administrators are more often using multiple tools to manage email applications. The survey revealed that 53% of respondents use 3 or more tools to manage email.  Many of these tools require scripting and assimilation of outputs into meaningful views using spreadsheets or PowerPoint charts. 

 

What are companies doing to reduce email management complexity?

SaaS providers and application hosting providers are attractive alternatives to on-premises application environments because cloud providers take on and hide much of the complexity. Today, the majority of organizations surveyed (74%) have not transitioned to cloud technology. However, 37% of respondents believe that within 3 years, their organization will transition to a SaaS-based application, and another 22% believe their company will make the move in the next 5 years.

 

[Image: 1404_SWI_Email_Survey Infographic.jpg]

Click here to download a PDF of this infographic.

 

What can organizations do in the meantime to reduce the amount of time and money spent managing email?

 

Check out the SolarWinds Email Management Survey presentation on slideshare.

The short answer, before we get into the stats and the rest of the reasoning, is “a lot!”  There is much to gain from your increasing IT spend if you do it right, and if you do it in time.

 

The recent Gartner® Worldwide IT Spending Forecast has reported that global IT spending is increasing in various areas of IT. The role of IT in enabling business is being felt more and more, and top decision makers in organizations are coming to understand the value of IT investment more than they did last year.

 

THE SPEND TREND

Gartner states that worldwide IT spending is on pace to total $3.8 trillion in 2014, a 3.2 percent increase from 2013. While this spans multiple IT areas, it’s interesting to see the growth rate for spending on enterprise software, which is on pace to total $320 billion, a 6.9 percent increase from 2013.

 

While this is a positive outlook for 2014, the decision makers who are reading these numbers may wonder what they are going to get for spending more. Actually, a lot! IT has evolved through the years from a mere business accessory to an essential business enabler, contributor, and prerequisite for business growth. Enterprise software has redefined how business functions operate, adding agility, scale, global connectivity and process automation. For those of you who understand only numbers: ROI in terms of cost and time savings. When you know what technology is suited for your business, and what short-term and long-term plans you have to implement the technology and sustain it, you will have mastered the strategy of output-driven IT spending. Even if you did decide to invest more in IT enablement, what should you be spending on? Selecting vital IT services is important.

 

IT infrastructure management is a key area of IT that has become indispensable in any network and data center environment. When you have network hardware, storage systems, servers and applications, there are bound to be failures and errors that impact business continuity. IT management helps you get ahead of the issues, monitor your infrastructure proactively, and gives you the visibility and control to fix problems and eliminate the business impact. It also gives you easy means to automate and simplify the way you do IT, delivering more operational efficiency and less manual overhead in problem-solving and fixing things.

 

IT infrastructure management does both:

  • Monitoring and alerting on infrastructure problems (a minimal sketch of this loop follows the list)
  • Automating and troubleshooting infrastructure problems
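
The first of these, the monitor-and-alert loop, is simple to sketch. Below is a minimal, hedged illustration in Python: poll a basic availability check and raise an alert on failure. The host list, ports, and the TCP-connect check are placeholder assumptions; real infrastructure management tools poll via SNMP, WMI, agents, and the like.

```python
# Minimal monitor-and-alert loop. Hosts, ports, and the TCP-connect
# health check are placeholders; production tools poll via SNMP, WMI,
# agents, flow data, etc.
import socket
import time

HOSTS = [("web01.example.com", 443), ("db01.example.com", 5432)]  # placeholders

def is_up(host, port, timeout=3):
    """Crude availability check: can we open a TCP connection?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def alert(host, port):
    # Stand-in for email/SMS/ticketing integration.
    print(f"ALERT: {host}:{port} is unreachable")

while True:
    for host, port in HOSTS:
        if not is_up(host, port):
            alert(host, port)
    time.sleep(60)  # poll interval
```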

 

Infrastructure management spans various areas of IT including the network, servers and systems, storage and virtualized architecture, the security layer, and services for end-user IT support. Spending wisely, at the time of need, will help you realize the results of your IT investment more quickly.

  

HOW DO YOU MAKE SURE YOUR IT SPENDING IS GOING TO BE WORTHWHILE?

  • Understand your requirements well and choose the right IT management technology for your infrastructure
  • Determine whether the solution is going to scale with your growing infrastructure needs
  • Evaluate the various automation and simplification options that IT management solutions can offer you
  • Figure out where the ROI from your investment is going to come from – problem-solving, failure detection, threat prevention, time savings, etc.
  • Ensure you don’t get into vendor lock-in or a money pit with single-vendor solutions
  • Let the real end users of IT management solutions, the IT guys – network admins and system admins – have a say in what you implement

 

It’s not about how much you spend to keep up with the spending trend. It’s about how wisely you spend on IT services in order to reap the full benefits of your investment while growing your business.

 


Cybersecurity breaches in the government seem to be all over the news. (If you haven’t heard of Edward Snowden… well, he may know who’s heard of you – that’s all we’ll say.) The sheer number and wild variety of sources for these breaches led us at SolarWinds to wonder what federal agencies are really dealing with on a regular basis – are insiders leaking data? Are hackers stealing it? Who’s responsible and what can federal IT operations and IT security teams do to prevent these breaches?

 

We partnered with leading government research provider Market Connections to survey 200 IT and IT security professionals in the federal government and military on the top cybersecurity threats they face as well as what obstacles they have to implementing IT security strategies and what actions they are taking to remediate threats.

 

These survey results demonstrate that a broad and concerning range of cybersecurity threats plague government agencies, with threats coming from careless and untrained agency insiders nearly as frequently as from malicious external attackers and hackers.


[Image: whiteboard.png]

While federal IT Pros face cybersecurity threats both from malicious outsider threats and internal ignorance, they must prevent and mitigate these attacks despite organizational issues and budget constraints. Finding the right software can provide much of the tech armor an agency needs to automate monitoring and thwarting of threats, but acquiring that technology has its own set of obstacles.


[Image: whiteboard2.png]

Given the variety of cybersecurity threats and the unpredictability of human behavior, coupled with budget and organizational challenges, federal IT pros must consider taking a more pragmatic and unified approach to addressing the availability, performance, and security of their infrastructures. Following the “collect once, report to many” approach, federal IT pros can opt for tools that provide continuous monitoring of their networks, servers and apps across both their IT Operations and Information Security domains for maximum IT security coverage.





Some software development processes omit the user from the design phase and leave product managers, marketing, sales, or ‘subject-matter experts’ to drive the design, but what they are actually doing is guessing what users want. Co-designing with users, on the other hand, puts users at the forefront of the process, ensuring that the final product is both usable and useful.

 

The benefits of co-design for a customer are clear: an end product that meets the user’s needs, is easy to use, and solves a real problem. From an engineer’s perspective, co-designing reduces the amount of throwaway code by incorporating early feedback via simple prototypes and drawings of the software. And co-designing puts the product team in a position to learn from customers on a daily basis, hear their problems, and try to solve them.

 

At SolarWinds, talking with our users and putting them at the center of our product design process is what gets the user experience (UX) team up in the morning and fuels our engine. From asking our users for real life use cases that drive the design of a new feature, to conceptual walkthroughs with early sketches, and usability/usefulness testing with detailed prototypes, users participate at every stage of the design process and their feedback is vital to ensuring that our products help IT Pros get their jobs done.

 

[Image: jeff.jpg]

Naturally, SolarWinds was excited to host fellow co-design advocate, Jeff Gothelf. Jeff is a designer himself and author of “Lean UX: Applying Lean Principles to Improve User Experience.” Jeff spoke last week at SolarWinds headquarters in Austin, TX and shared his experience building successful in-house innovation teams. Some highlights from the presentation include:

 

  • Remember, yesterday’s assumptions don’t work in today’s realities – assembly lines may work for tangible products, but they don’t work in software.
  • The end goal in an agile process isn’t well defined, a detailed spec no longer works, and innovation is best achieved through experimentation and learning.
  • Innovative teams are tasked to achieve business outcomes, not a pre-defined specific solution.

 

These principles of innovation are closely followed by SolarWinds product teams. We constantly learn from our customers and adjust our product plans accordingly – even once product development is underway.

 

A special thanks goes out to Agile Austin for making this fantastic event happen.

 

The UX @ SolarWinds team is always looking for new IT Pros to collaborate with on UX activities including co-designing, usability testing, and focus groups. If you would like to be part of this process, please email kellie.mecham@solarwinds.com.

 

SolarWinds is always looking for UX talent. If you are interested in becoming part of the SolarWinds UX team, and love working with IT Pros, please email annie.ficklin@solarwinds.com.

In the second of this two-part series, we talk about survey results obtained from thwack members on how they’re managing IP addresses in the age of BYOD and IoT. In the previous discussion, we profiled our responders and learned that the following three top-level capabilities are most important to IP Address Management:

  1. Monitor IP address usage and IP resource utilization
  2. Accurately provision IP addresses and automate routine configuration tasks
  3. Manage and maintain IP address documentation

 

We also learned that on average a network administrator spends about one day each week creating and maintaining IP address documentation, provisioning DHCP and DNS servers, and monitoring and troubleshooting IP resources.

 

Here, we’re going to take a closer look at what detailed capabilities network admins think are most important to IP address management. Because we wanted to talk about a large number of features, we decided to organize these into the following five categories:

  1. Troubleshooting and Incident Response
  2. Monitoring and Alerting
  3. Administration and Reporting
  4. IP Provisioning
  5. Task Automation 

 

Troubleshooting and Incident Response

 

Under the category of Troubleshooting and Incident Response, the most desirable feature was the ability to identify end-point devices connected to an IP address. On the other hand, the least important feature was the ability to kill a network connection to a device.

 

Monitoring and Alerting

 

Under the category of Monitoring and Alerting, the highest ranking feature was the ability to monitor addresses for accurate status (e.g., used/unused/transitory) for re-assignment or reclamation. The lowest ranking feature was the ability to identify BYOD/mobile end-point devices.

 

Administration and Reporting

 

Under the category of Administration and Reporting, the most popular capability was the ability to view a change history (e.g., who changed what, when). The least popular capability was the ability to create specialized compliance reports.

 

IP Provisioning

 

Under the category of IP Provisioning, the most important capability identified was being able to find available IP addresses to use in subnets, virtual servers, and other applications.  The least important capability was being able to configure and use DNSSEC options.

 

Task Automation

 

Under the category of Task Automation, the most important capability was the ability to perform bulk backup and restore of configuration settings on DHCP and DNS servers while the least important capability was a “self-service” portal.

 

What We Learned about How IT Professionals Manage IP Addresses

 

Looking across all five categories, we identified five features which ranked at the very top for being most important. These are:

  1. Easily find IP addresses available to use (see the sketch after this list)
  2. Monitor and detect IP address conflicts
  3. Identify properties associated with end-point devices (like location, MAC address, OS, user, etc.)
  4. Easily search to find address and details
  5. Easily reclaim addresses no longer in use. 
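
The first two items lend themselves to a quick illustration. Here is a minimal, hedged sketch in Python using the standard ipaddress module; the subnet and allocation records are invented stand-ins for what a real IPAM tool would pull from DHCP leases, DNS zones, and network scans.

```python
# Sketch of the two top-ranked capabilities: find free addresses in a
# subnet and flag conflicts. The records below are invented examples.
import ipaddress
from collections import Counter

subnet = ipaddress.ip_network("10.0.42.0/28")   # example subnet

# (address, hostname) records standing in for an IPAM database
records = [
    ("10.0.42.1", "gateway"),
    ("10.0.42.2", "dns1"),
    ("10.0.42.5", "app-server"),
    ("10.0.42.2", "printer-3"),  # same address claimed twice: a conflict
]

# 1. Easily find IP addresses available to use
used = {ipaddress.ip_address(addr) for addr, _ in records}
free = [ip for ip in subnet.hosts() if ip not in used]
print("free:", ", ".join(map(str, free)))

# 2. Monitor and detect IP address conflicts
claims = Counter(addr for addr, _ in records)
for addr, count in claims.items():
    if count > 1:
        hosts = [h for a, h in records if a == addr]
        print(f"CONFLICT: {addr} claimed by {', '.join(hosts)}")
```

A real deployment would, of course, verify “free” addresses with an active scan before handing them out, since recorded allocations can drift from reality.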

 

We also observed a high degree of correlation between these detailed capabilities and the following top-level capabilities:

  1. Monitor IP address usage and IP resource utilization
  2. Accurately provision IP addresses and automate routine configuration tasks
  3. Manage and maintain IP address documentation

 

In our final discussion we’ll review what survey respondents said about the overall importance of IP Management in the larger context of network administration and the benefits IP Management brings to their organization. In the meantime, add to the discussion.  What silver bullets would you like to have (or do you use) to manage IP addresses and supporting infrastructure?

 

The Importance of IP Address Management

 

Overall, our survey revealed that 88 percent of respondents agreed that IP address management is essential to overall network management. This same percentage also agreed that they need an IP address management tool to get the job done. Another 89 percent said they see value in using specialized IPAM tools, and 77 percent said they have business justification for purchasing such tools.

 

The Payoff of an IP Address Management Tool

 

In the final series of questions we wanted to know what operational benefits network admins observed by using a specialized IP address management tool. The most important benefit was lower risk and improved mean time between failures (MTBF). A good IPAM solution is essential because it can help proactively avoid configuration errors and device conflicts. The second highest operational benefit was closely related: improved mean time to recovery (MTTR).

 

What We Learned From Respondents about IP Address Management

 

Industry experience has consistently shown that lower risk coupled with improved MTBF/MTTR directly translate to greater efficiencies and cost savings. These savings are further amplified with additional savings in labor costs. When these cost reductions are combined they provide strong financial justification for acquiring an IPAM solution by delivering a payback period measured in only a few months. 

 

Perhaps the most compelling evidence for IP address management is how it helps network administrators deliver reliable IT services in spite of everyday challenges like creating, migrating or reconfiguring subnets, maintaining high availability of DHCP and DNS services, and ensuring reliable operations through monitoring of critical resources and proper forecasting and planning.

 

Your peers have spoken. The insights they provided are invaluable because they can help us learn and improve. We hope you find this survey helpful to that end. What are your thoughts?  Do your experiences align with the findings of this survey?

 

 

2014 has just begun, and yet it’s already shaping up to be the year of the Internet of Things (IoT). Everywhere you look you see articles about the explosion of virtualization, cloud, and BYOD, and you read about all the ways to benefit from new smart devices and sensors like thermostats and smoke detectors. Because we’re all rooted in the practical, we wanted to see how all of this is affecting you on the front lines. Ultimately, the Internet of Things means more work (and job security) for an enterprising network administrator—more configuring, monitoring and troubleshooting. So it’s in this context that we launched a survey to learn how people approach IP address management in the age of BYOD and IoT.

 

The survey was conducted between December 9, 2013 and January 13, 2014 in our thwack community. Our goal was to understand IT professionals’ point of view on 13 questions organized under four major sections. We wanted to learn how they define IP address management and what tasks they typically engage in. We also wanted to know how much time they spend on managing IP addresses and what tools and capabilities they prefer. In total, we had 195 responses.

 

Profile of Survey Participants

 

Most respondents said they managed large networks. Nearly 1/3 said they manage more than 5,000 IPs. About 1/4 of those surveyed represented organizations with between 500 and 2,000 IPs. Around 1/5 said they manage 2,000 to 5,000 IPs, and the remaining group managed up to 500 IPs. The most common role was in Network IT Operations, with the top three job titles being Engineer, Administrator, and Manager.

 

What Does IP Address Management Include?

 

Next, we asked, “What does a good IP address management solution need to offer?” We presented participants with a number of top-level capabilities and asked them to prioritize their answers as “must have,” “nice to have,” and “not needed.” The highest priority (must have) top-level capabilities are:

  1. Monitor IP address usage and IP resource utilization
  2. Accurately provision IP addresses and automate routine configuration tasks
  3. Manage and maintain IP address documentation

 

How Much Time Do You Currently Spend Managing IP Addresses?

We asked the respondents to cite how much time they spend managing IP addresses across a variety of activities including maintaining and publishing IP address documentation, managing DHCP and DNS configurations, monitoring and troubleshooting IP resources, and more. The weighted average across all participants was 38 hours each month, and 49 hours each month among administrators with more than 5,000 IPs.

 

What We Learned

We took away a few key concepts from these initial questions. First, managing IP addresses is not limited to large networks; administrators working with networks of all sizes are affected by IP address and infrastructure management. Second, while we intuitively understand the problems caused by managing IP addresses manually using spreadsheets, we cannot overlook the larger, related challenge of managing the entire IP infrastructure, consisting primarily of DHCP and DNS servers.  Finally, we were surprised to observe that IP management consumed so much time.  An average of 38 hours a month is roughly equivalent to one person spending a full day each week. Translating this into dollars, we see that this equals about $23K annually, or approximately a quarter FTE.
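
The arithmetic behind those last figures is easy to check. As a back-of-the-envelope sketch (the 2,080-hour work year and the roughly $50/hour loaded labor rate are our assumptions, not survey figures):

```python
# Back-of-the-envelope check of the survey math. The 2,080-hour work
# year and ~$50/hour loaded labor rate are assumptions, not survey data.
HOURS_PER_MONTH = 38                    # survey: monthly IP management time

hours_per_year = HOURS_PER_MONTH * 12   # 456 hours
hours_per_week = hours_per_year / 52    # ~8.8 hours, about one working day
fte_fraction = hours_per_year / 2080    # ~0.22, roughly a quarter FTE
annual_cost = hours_per_year * 50       # $22,800, i.e. about $23K

print(f"{hours_per_week:.1f} h/week, {fte_fraction:.2f} FTE, ${annual_cost:,}/year")
```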

 

But we’re not finished yet. Join us again in the second part of this two-part series as we explore what features admins think will simplify IP address and DHCP and DNS administration in the age of BYOD and IoT. Until then, tell us if your experiences coincide with these survey results. How much time do you spend managing IP addresses, DHCP and DNS systems, and monitoring and troubleshooting IP resources and issues? What makes this so difficult and time consuming?

 

 

What does the Windows XP rundown mean?

  1. Microsoft will no longer provide technical support for issues involving Windows XP, unless the customer has purchased a Premier Support Services contract.
  2. Microsoft will no longer provide security updates for the product, or anything associated with the product (for example, Internet Explorer 6).

 

Why is this important?

According to NetMarketShare, about 29 percent of the Internet-connected systems today are still running Windows XP, and there’s no universal agreement that this number is going to change in the next 90 days. Here’s a recap of the things IT pros need to consider:

  • Cost. It’s a no-brainer that switching to a new OS produces significant costs in terms of time, money and personnel, from up-front infrastructure cost to time spent training and educating end users. IT needs to look at the cost/benefit of replacing every Windows XP system with a new OS such as Windows 7 versus the cost of maintaining Premier Support Services. If neither of these options seems acceptable, one has to also consider the cost of remediating continual malware infections, potential loss of data due to socially engineered attacks or complete loss of use due to corruption or destruction as the result of malware. More on that to come. This is a scary thought given the fact that this is most likely what’s going to actually happen in many organizations; Windows XP systems will keep on going, but without any new updates.
  • Security. The end of support for Windows XP also means the end of XP patches. So, any security flaws not fixed after April 8 will almost certainly be exposed and exploited. Another security challenge facing XP goes beyond the OS itself, and has to do with the fact that there may not be a secure browser for use on the machine. Microsoft did not build IE9 for Windows XP. What remains to be seen is what Mozilla and Google do with respect to versions of Firefox and Chrome for XP beyond April.
  • Application availability. Some software vendors haven’t been diligent in updating their software applications to run on newer operating systems. In many cases, they’re 32-bit applications originally written for Windows 95 or 98 that still “play nice” on a Windows XP system, but require local administrator privileges to run. Newer operating systems, however, are not compatible with these applications. Businesses continue to depend on those applications and the operating system they run on. (We should note that with Windows 7, Microsoft introduced “Windows XP Mode,” which is a virtual environment running on Windows 7 Enterprise and Windows 7 Ultimate. However, many of the current issues with Windows XP support will also impact the “Windows XP Mode” environment, though possibly to a lesser degree because “Windows XP Mode” is not typically where the Internet activity is executed.) Nonetheless, organizations are caught between software vendors that aren’t updating their applications to run on newer operating systems, and Microsoft enabling this practice by providing Windows XP support within newer operating systems—until April 8, that is.
  • Organizational and end-user buy-in. For any IT organization that’s already experienced the pains of transitioning to VoIP, VDI or even installing Microsoft Office upgrades, the thought of introducing, selling and training leadership and end users on an entirely updated OS sounds like the opposite of fun. It’s crucial to get organizational and end-user buy-in, even when the reasons for making the change are entirely valid and will inevitably leave the organization better off.  

 

Additional References

For more information, we found the following articles helpful:

 

Be sure to weigh in on this issue over on the General Systems & Application forum here: Coming April 8: Windows XP Rundown. We’ve also got an XP rundown-related poll running that we invite you to participate in here: XP Support Coming to an End.

It’s no surprise that shifting expectations from employees, who want to use consumer devices and apps in the workplace, are leaving a permanent mark on business processes. As we’ve seen from the respective studies, more businesses recognize that IT holds an important key to success, and IT pros need to be prepared to take on new levels of responsibility within the business strategy.

 

It's clear the days of the IT pro’s limited impact on business are long gone. As Suaad Sait mentioned in last week’s post, the day of the old IT is behind us – bring on the New IT!

 

On that note, let’s take a closer look at the UK, Australia, and Germany survey key findings. Over the coming month, we’ll see further research from Brazil.

 

UK survey findings

 

The evolution of technology means that IT professionals must adapt their skill-sets and levels of responsibility in order to cope with the demands of emerging and disruptive technologies.

  • One in four of those surveyed suggested that BYOx is the emerging technology that is most disruptive to business.
  • Mobility, cloud computing, data analytics and compliance round out the top five emerging technologies.
  • Over 50 per cent of respondents suggest that cloud/SaaS and information security are the top IT skill-sets that will be in high demand over the next three to five years, followed by mobile apps and device management.

 

Increased infrastructure complexity also means disruptive technology and new IT skills are required to effectively manage networks and systems.

  • More than 50 per cent of IT pros view information security and cloud as the top IT skillsets that will grow in demand over the next three to five years.
  • BYOx ranked first when survey respondents were asked which emerging technology is the most disruptive to business.

 

[Image: UK1.png]

 

Modern IT professionals are now expected and must be prepared to help their companies make informed, strategic business decisions with regard to emerging technologies.

 

  • 97 per cent of all IT pros are confident that they can provide the guidance and expertise necessary to help their company make informed decisions with regards to emerging technologies
  • While 97 percent of survey-takers said they feel at least somewhat confident in their ability to provide such advice, only one-third of those are completely confident in doing so
  • To feel more empowered to provide strategic advice, slightly more than half of respondents said they need more training in their area(s) of responsibility, and nearly 40 percent said they need a better understanding of their company’s overall business

 

[Image: UK2.png]

 

Technology’s rise in importance as a core business component may have only been outpaced by the complexity it created. This increasing infrastructure complexity has affected the role of nearly all IT professionals.

 

  • Over half (53%) of all IT departments now manage virtualisation, mobility, compliance, data analytics, SDN/virtual networks, BYOx, cloud computing and self-service automation
  • 40 per cent of respondents said increasing complexity has greatly affected their responsibilities over the past three to five years, and an additional 49 per cent said it has somewhat affected their role
 

You can find the results at the following URL:  http://www.slideshare.net/SolarWinds/solar-winds-newitdriversrolesresearchuk120513final/1

 

Australian survey findings

 

IT’s role in strategic business decisions

  • Ninety-nine per cent of respondents said they are given the opportunity to provide the guidance and expertise necessary to help their company to make informed, strategic business decisions in this area.  But the majority (70 per cent) have the opportunity to do so only occasionally, or rarely.
  • Ninety-six per cent said they feel at least somewhat confident in their ability to provide such advice, while 44 per cent said they are completely confident in doing so.
  • To feel more empowered to provide such advice, almost half of respondents said they need more training in their area(s) of responsibility.  Nearly 44 per cent said they need a better understanding of their company’s overall business.

 

[Image: AU1.png]

 

Demand for new skillsets

  • More than 50 per cent of those surveyed said cloud computing and information security top the list of IT skillsets that will grow in demand over the next three to five years. Business analytics followed.
  • Respondents said information security is the IT role that will need to adapt the most to evolving technology over the next three to five years.
  • Cloud computing ranked as the most important technology for businesses to invest in to remain competitive for the next three to five years. Mobility, data analytics, virtualisation (server or desktop), self-service automation and BYOx followed suit, in that order.

 

[Image: AU2.png]

 

Other findings

  • Over half of all IT departments now manage virtualisation, mobility, compliance, cloud computing, BYOx, SDN/virtual networks, data analytics and self-service automation.
  • Forty-seven per cent of respondents believe that increasing complexity has affected their responsibilities greatly over the past three to five years, while 42 per cent said it has affected their role somewhat.

 

Or find the results at the following URL:  http://www.slideshare.net/SolarWinds/solar-winds-newitdriversrolesresearchaus121113final

 

German survey findings

 

IT professionals were asked how the role of IT has changed in Germany over the past five years, and what requirements they must meet to keep pace with technological developments. Their responses show that the IT landscape has undergone major changes in recent years.

  • The IT infrastructure of German companies is becoming increasingly complex due to new technologies such as virtualization, mobility and cloud. Accordingly, the role and responsibilities of IT professionals have also changed and adapted to the technical developments over the past three to five years. This was confirmed by 97 per cent of the respondents.

[Image: Germany_1.jpg]

  • Almost half (45 per cent) of those surveyed indicated that they need more training in order to look after their responsibilities adequately. Only one-third (36 per cent) believe that their professional skills are sufficient. A further third said they need a better understanding of business processes.

[Image: Germany_2.jpg]

 

SlideShare: http://www.slideshare.net/SolarWinds/new-it-survey-germany

 

Brazilian Survey Findings

 

Brazilian IT pros have increasing, but still limited opportunities to help their companies make informed, strategic business decisions.

  • Over 30 percent of all IT pros define the “New IT” environment as a broader partnership/better relationship with the business.
  • One hundred percent of respondents said they were given opportunities to provide the guidance and expertise necessary to help their company make informed, strategic business decisions with regard to emerging technologies. Some are more confident in this involvement than others:

[Image: brazil1.png]

To feel more empowered to provide business advice, nearly 60 percent of respondents said they need more training in their areas of responsibility and half said they need a better understanding of their companies’ overall business. Where should Brazilian IT pros focus that training? Most say they will prioritize cloud and SaaS management.

 

[Image: brazil2.png]

 

 

Most IT pros face a number of challenges managing and modernizing their IT infrastructures – budgets, bandwidth, and bosses often hinder progress – and IT pros in government often face these challenges at an even more exaggerated level. However, SolarWinds has noticed that many public sector IT pros are addressing these challenges by adopting automation technologies in their IT infrastructure. We set out to learn how that automation is going so far. In our recent survey of 162 IT pros from federal and state/local government, we learned the importance of automating technology and the restrictions and red tape that often get in the way of that progress.

 

Key Findings:

 

Where are federal IT pros in the automation process? Some have yet to automate anything, but most are somewhere in the process of evaluating technologies, implementing them, or have already completed implementation. In fact, more than two-thirds of survey respondents said they are already in the process of implementing a variety of technologies and 63 percent of respondents are planning an automation project during 2014.

 

[Image: img1.png]

 

Federal IT pros who have automated some or all of their information technologies have already begun to realize real ROI from their automated IT deployments. More than 84 percent of survey respondents said the automation of information technologies in their IT infrastructures was a time- and money-saving investment for their teams, and 67 percent of respondents have seen increases in their teams’ productivity as a result of investments in automation.

 

[Image: img2.JPG]

The automation tools that provide the most overall benefit in terms of time/money saved are:

  • 58.3% Network Configuration Management
  • 41.7% Help Desk
  • 38.8% IP Address Management (including IPv6)
  • 36.6% App/Server Provisioning/Config Management
  • 23.7% Storage Management
  • 22.3% Virtualization Management
  • 20.1% Patch Management and Compliance Reporting
  • 18.0% Business Process/Work Automation
  • 9.4% Log Management
  • 7.9% Mobile Device Management

 

So what’s the holdup for the others? And why aren’t IT shops automating everything? As always, lack of budget and lack of training play a part.

 

[Image: jpg3.JPG]

 

Even with these roadblocks, though, IT pros in the public sector continue to recognize the importance of streamlining IT. With the breadth of IT management software vendors available, it’s now up to Federal IT pros to identify the most pressing challenges in their IT infrastructures and to find the right automated technologies to simplify those challenges. Luckily, with such strong evidence that automation saves time and money and increases productivity for government organizations, the case to automate is pretty clear.

 

Full Survey

 

Or find the results at the following URL:

Automation in Public Sector IT Systems

No one said it better than the great Bob Dylan. Indeed, for IT pros, “the times they are a-changin’.”

 

That is far and away the prevailing message from the results of our recently conducted New IT Survey, which provides an in-depth look at the evolving role of IT, including its drivers, needed skillsets and how IT pros view their role within what can be called the new IT environment.

 

As technology has grown in scale and power, influencing the way we work, play and communicate, the role of the IT professional has become increasingly important. The day of the old IT is behind us; we’re now in an age where IT pros are more than simply a technical resource, they’re heroes. Indeed, behind every successful business is a team of great IT pros.

 

However, as the survey highlights, being a hero isn’t easy. Today’s IT pros are universally affected by increasing infrastructure complexity, and the need to adapt to disruptive technologies and acquire new IT skillsets is the name of the game. Not to mention the fact that IT pros are now being called on to step outside the comfortable confines of their technical world and help their companies make informed, strategic business decisions.

 

Without further ado, let’s take a closer look at the survey’s North America key findings. You can check out additional results across the globe here.

 

North America Key Findings

  1. Technology’s rise in importance as a core business component may have only been outpaced by the complexity it created. This increasing infrastructure complexity has affected the role of nearly all IT professionals.
    • Ninety-four percent of IT pros agree that infrastructure complexity has affected their role over the past three to five years
    • Over 50 percent of all IT departments now manage virtualization, mobility, compliance, data analytics, SDN/virtual networks, BYOx, cloud computing and self-service automation
  2. Among the results of this IT evolution, IT professionals are now expected and must be prepared to help their companies make informed, strategic business decisions with regard to emerging technologies.
    • Ninety-nine percent of IT pros are given the opportunity to at least occasionally provide guidance to help their companies make strategic business decisions with regard to emerging technologies
    • Ninety-five percent of IT professionals feel at least “somewhat confident” in their ability to provide such advice; though only 1/3 are completely confident in doing so
      [Image: 1.png]
  3. Increased infrastructure complexity also means disruptive technology and new IT skills are required to effectively manage networks and systems.
    • More than 50 percent of IT pros view information security and cloud as the top IT skillsets that will grow in demand over the next three to five years
    • BYOx ranked first when survey respondents were asked which emerging technology is the most disruptive to business

          [Image: 2.png]

 

Full North America Survey

 

Or find the results at the following URL:

https://www.slideshare.net/SolarWinds/solar-winds-new-it-survey-full-results-north-america

The storage admin doesn’t always have the most glamorous life.  They typically have responsibility for all the storage arrays and infrastructure supporting the most critical systems for the company, from virtualization to applications and databases. A lot of people get upset when there is a storage problem.  There has also been a shift over the last few years. As the cost of storage has dropped, the primary storage issue has shifted from disk space to storage I/O – essentially how fast data can be read from or written to the storage media. This becomes critically important for application performance, especially in the high-performance world of databases and I/O-intensive activities like virtualization.

 

The other key trend is around virtualization and flexibility of application infrastructure.  When most applications were hard-wired to physical servers and the servers to dedicated storage, storage load was much more predictable and steady. Now with things like vMotion and high performance VMs capable of running large workloads like virtualized databases, I/O can change quickly.

 

That is where the Clark Kent analogy comes in.  The storage admin doesn’t always get the most respect (maybe Rodney Dangerfield would be better). Even though they can be the critical link in application performance, they are often not consulted before changes occur in the virtual or application environment.  Things are often just changing too fast.  So while they have responsibility for an increasingly mission-critical resource, storage I/O load changes often come at them out of the blue, with little warning or interaction from other teams.  Don’t worry; they still get the blame when the storage systems cause an application performance problem.

 

But there is a potential phone booth on the horizon for storage admins that could facilitate a quick change into their alter ego.  Many of the new storage technologies, especially solid-state disk (SSD), have the potential to turn the storage admin into Superman, saving the day for multiple other teams including the virtualization, application and database teams. SSD has radically better storage I/O performance than traditional spinning disk but is very expensive by comparison.  Due to the cost, blind investment in SSD can result in a large expense without a corresponding payback.  As a result, we continually hear IT users asking “How do I effectively use SSD in my environment?” Given how important that question is becoming, we worked with George Crump and the team at Storage Switzerland to provide some additional guidance about how and where to use SSD to get the most benefit from the investment.  The first article, titled “How Do I Know My Virtual Environment is Ready For SSD?”, was posted January 7.  An additional article on using SSD to enhance database performance will be coming soon.

 

As new storage technologies like SSD become more mainstream, it will be even more important that storage admins in all kinds of IT environments have the ability to gather the right data to determine how much and where to use SSD to optimize the investment.  Hopefully then the storage admin can spend more time feeling like Superman and less like Clark Kent.

The last blog (Will Cisco’s Insieme Networks Acquisition Rain on VMware’s NSX Party? – Part 1) spoke about how Cisco’s network hardware market dominance is being threatened by a number of factors with the most prominent ones being SDN and VMware’s network virtualization solution called NSX. Let’s take a look at Cisco’s answer to these threats.

 

Today’s frenzy is about Cisco’s acquisition of its own spin-in startup Insieme Networks, which is billed as the company's answer to the combined market threats of commodity hardware, SDN-enabled legacy hardware and network virtualization from VMware in the form of NSX.

 

Insieme – SDN solution or SDN killer?

The Insieme answer, named Application Centric Infrastructure (ACI) with the Application Policy Infrastructure Controller (APIC), is hardware-based and application-centric, unlike the software-centric VMware NSX, and provides Cisco’s counter-argument to the NSX software-first approach. Cisco seems to be betting big that the Insieme technology not only marginalizes the irritant of SDN, but can keep NSX in check as well.

 

Interestingly, the Nexus 9000 switch includes both custom ASICs and commodity silicon. It provides open-standards-ish support for limited SDN capabilities, like a separated data and control plane, OpenFlow, and even Cisco’s own onePK. That should help maintain the loyalty of admins who love Cisco and want SDN.

 

Cisco’s ultimate answer to SDN is ACI with APIC. With custom silicon, the switch performs better and provides more features, at the cost of incompatibility with open-standards SDN. The advantage here could still be with Cisco. Cisco’s history of performance and scalability, and ACI’s application-centric nature, where the network is made aware of the application’s demands rather than the application behaving according to the network, may resonate with customers more than networks that are quick to set up and reconfigure.


Get Your Popcorn Ready

Networks exist to deliver application services, and every network admin strives to provide the best performance at the lowest cost. With Cisco promoting ACI as a solution for application-aware networks, users may still see more value with ACI than with NSX. Cisco has also painted software-centric approaches like NSX as not scalable, providing limited multi-hypervisor support and posing an integrated-security challenge. It remains to be seen what troubleshooting and monitoring look like on NSX.


It’s most likely that Cisco will continue down the middle path, as it has done with the Nexus 9000, supporting both SDN and custom silicon but recommending users stick with single-vendor hardware if they’re concerned about scalability, performance and features that commodity SDN or NSX cannot provide. So, while ACI may not kill SDN or NSX, it may dampen VMware’s hopes of converting market share, with ACI clouding the NSX landscape.

Cisco’s primary revenue has been its hardware with custom silicon and software that provides highly scalable networking for enterprises, data centers and telecommunication industries. That threatens to change with networking vendors using off-the-shelf silicon to build commodity hardware, with only software differentiating offerings from competitors. Hardware from these vendors not only provides near parity performance with established ASIC-centric products but is more affordable, threatening Cisco‘s market share.

 

The threat from SDN

Adding to the threat is that non-proprietary hardware actively encourages and supports software-defined networking (SDN) as a differentiator. With datacenters getting bigger, operators seek hardware that is not only inexpensive but also easy to manage and quickly reconfigurable when changes have to be made. SDN delivers that promise, providing flexible networks at lower costs than established vendors. Using SDN, datacenter operators can separate the physical network from the control plane, enabling easier programming and quicker management of their network. Enterprises with huge data centers, such as Google, Amazon, Apple and Facebook, have already moved to in-house-developed SDN using commodity hardware.

 

VMware – Friend or Foe?

VMware, like Cisco, offers proprietary solutions that work well within its own ecosystem. VMware is also a Cisco partner, as with the Nexus 1000V. But VMware changed this relationship with the acquisition of Nicira, a pioneer in SDN, for $1.2 billion. After seeing the advantages of server virtualization, enterprises and network admins are looking to virtualize their entire datacenter, and NSX offers a compelling alternative. VMware NSX takes a software-centric approach, a path different from Cisco’s hardware-centric solutions.


NSX network virtualization, though it separates the control and data plane, is not the same as SDN.  According to Martin Casado, OpenFlow creator and chief architect of networking at VMware, “Network virtualization and SDN are two different things and somewhat incomparable. SDN is a mechanism and network virtualization is a … solution”. Something else about NSX is that it can work on any hardware, be it custom silicon hardware like those from Cisco or general purpose hardware and that I believe gives NSX an advantage over SDN. SDN requires hardware that supports OpenFlow or similar SDN protocols, which means capex for upgrades or new hardware.


Some might believe that competition from NSX vs. SDN is good for Cisco by changing the conversation away from open-standards SDN. But after watching an NSX demo, I found VMware’s claim that it can give large datacenters the ability to bring up a scalable network in minutes, leveraging existing hardware, impressive. And existing hardware includes both proprietary and open solutions. Rather than simply knocking SDN out of the picture, this approach also cuts directly at Cisco’s proprietary SDN-alternative vision. To add to this, many of Cisco’s rivals such as Juniper, Dell, Arista and Brocade are eagerly supporting integration with VMware’s NSX network virtualization platform.

 

For many years it has been Cisco vs. everyone else. NSX and SDN are now out to change that and provide alternative platforms. But Cisco has an answer; let’s look at that in part 2 this coming Tuesday.

In October, SolarWinds acquired Confio, maker of Ignite database performance software.  In the DBA community Ignite is well known and widely used by top tier DBAs and database developers.   For the rest of the SolarWinds community, here are the key points that will help explain what makes Ignite such a popular solution.

  1. Response Time Analysis.   From the very beginning, Ignite was built to measure the TIME it takes for the database to respond to queries.  And Ignite does not just measure time to complete a query, but tracks the time spent on each step along the way, identifying the resource bottlenecks that most contribute to the delays.  

    Why is this important?   The purpose of a SQL Server, Oracle, Sybase or DB2 database is to deliver a query response to an application.  Knowing whether the server is busy doesn’t help the application user if they are waiting on a slow database.   Rather, the key to delivering better application response is knowing which bottlenecks are causing delays and how to fix them.

    Think of a real life comparison.  If you want to minimize the time it takes to drive to work, your best bet is understanding how much time is spent at each stop-light on the way, then figuring out how to avoid the long ones (Wait Time).   That is more effective than focusing on how fast your engine is running (CPU or server health).     More about Response Time Analysis.

    Ignite measures the time spent at bottleneck steps called Wait Types or Wait Events, then analyzes and tells the DBA or Developer how to get faster results. No other tool gives the same wait details, or ties them into Response Time Analysis. 

  2. Big Bars Bad.   Response time data is complex, with hundreds of Wait Types for every query processed, along with many other details like execution plans, locks, programs, users, and objects touched along the way.   Ignite makes it simple, clearly showing response times in the form of bar charts at every level of detail.   When you look for a problem in Ignite, just look for the “longest bar”, because that is the most critical problem to focus on.    Ignite users know that the longest bars are the key to their performance issues, and quickly learn from Confio that Big Bars are Bad.   Nothing else is as simple. (A minimal sketch of this ranking idea appears after this list.)
    [Image: confio1.png]
    Which query is the biggest problem?


  3. 4 Clicks to the Problem.   Serious DBAs use serious tools, and some of them have extremely complex screens of statistics that few DBAs or Developers can understand.  Ignite excels at making it easy to get to the root cause of the problem, typically in about four clicks.  This means that everyone on the DBA team, application developers, or managers can use Ignite without specialized training.
    By clicking on the alarms that indicate problems, and following the big bars, Ignite users get right down to the root cause.   More details on Ignite.
    [Image: confio2.png]

Section of Ignite home screen, with red alerts designating the first click to find the problem
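
To make the “Big Bars Bad” idea concrete, here is a minimal sketch of the underlying aggregation: sum wait time per query, rank by total, and the longest bar is the first thing to fix. This is not Confio’s implementation, and the sample queries, wait types, and timings are invented for illustration.

```python
# Sketch of "Big Bars Bad": aggregate wait time per query and rank by
# total. Not Confio's implementation; sample data is invented.
from collections import defaultdict

# (query, wait_type, wait_ms) samples, as a wait-event collector might record
samples = [
    ("SELECT * FROM orders WHERE ...", "PAGEIOLATCH_SH",      4200),
    ("SELECT * FROM orders WHERE ...", "CXPACKET",             900),
    ("UPDATE inventory SET ...",       "LCK_M_X",             2600),
    ("SELECT name FROM users ...",     "SOS_SCHEDULER_YIELD",  150),
]

totals = defaultdict(lambda: defaultdict(int))
for query, wait_type, ms in samples:
    totals[query][wait_type] += ms

# Rank queries by total wait time and draw the "bars"; the biggest wait
# type per query points at the bottleneck to fix first.
for query, waits in sorted(totals.items(), key=lambda kv: -sum(kv[1].values())):
    total = sum(waits.values())
    bar = "#" * (total // 500)
    top_wait = max(waits, key=waits.get)
    print(f"{bar:<11} {total:>5} ms  {query[:32]}  (top wait: {top_wait})")
```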
