
Geek Speak


Not to be outdone in the race for federal network modernization, the United States Army last year issued the Army Network Campaign Plan (ANCP). Created under Lt. Gen. Robert Ferrell, the Army’s outgoing chief information officer (CIO/G-6), the ANCP “outlines current efforts that posture the Army for success in a cloud-based world” and provides “the vision and direction that set conditions for and lay a path to Network 2020 and beyond.”

 

These broad and bold statements encompass several things. First, there’s the Army’s desire to create a network that aligns with DISA’s Joint Information Environment (JIE) and the Defense Department’s modernization goals, which include better insight into what’s happening within its networks and tighter security postures. Second, there’s the pressing need to vastly improve the services the Army is able to deliver to personnel, including, as outlined in the ANCP, everything from “lighter, more mobile command posts to austere environments that will securely connect the network and access information.”

 

How unifying operations and security fits into the ANCP

 

The need for greater agility outlined in the ANCP dictates that operations and security teams become more integrated and unified. The responsibilities of one can have a great impact on the other. Working together, cybersecurity and operations teams can share common intelligence that can help them more quickly respond to threats and other network problems.

 

Similarly, the solutions that managers use to monitor the health and security of their networks should offer a combination of features that address the needs of this combined team. As such, many of today’s network monitoring tools not only report on the overall performance of the network, but also provide indications of potential security threats and remediation options.

 

Why letting go of the past is critical to the success of the ANCP


Combining operations and security teams is a new concept for many organizations, and it requires letting go of past methodologies. The same mindset that contributes to that effort should also be applied to the types of solutions the Army uses moving forward, because the ANCP will not be successful if there is a continued dependence on legacy IT solutions.

 

It used to be fine for defense agencies to throw their lot in with one software behemoth controlling large segments of their entire IT infrastructure, but those days of expensive, proprietary solutions are over. Army IT professionals are no longer beholden to the technologies that may have served them very well for the past few decades, because the commercial market for IT management tools now offers lightweight, affordable, and easy-to-deploy solutions. The willingness to let go of the past is what drives the evolution of federal IT, and it is at the heart of all modernization efforts.

 

The fact that Ferrell and his team developed a plan as overarching as the ANCP indicates they are not among the IT leaders clinging to legacy approaches. In fact, the plan itself shows vision and a great desire to help the Army “be all it can be.” Now, the organization just needs to fully embrace new methodologies and technologies to reach that goal.

 

To read the extended article, visit Defense Systems.

Remember, back in the day, when you’d go to a website and it was down? Yes, down. We’ve come a long way in a short time. Today it’s not just downtime that is unacceptable; users get frustrated if they have to wait more than three seconds for a website to load.

 

In today’s computing environments, slow is the new down. A slow application in a civilian agency means lost productivity, but a slow military application in theater can mean the difference between life and death. Due to a constantly increasing reliance on mission-critical applications, the government must now meet, and in most cases surpass, the high performance standards being set by commercial industry, and the stakes continue to get higher.

 

Most IT teams focus on the hardware, after blaming and then ruling out the network, of course. If an application is slow, the first thought is to add hardware to combat the problem: more memory, faster processors, an upgrade to SSD storage, and so on. Agencies have spent millions of dollars throwing hardware at application performance issues without a good understanding of the bottleneck that is actually slowing the application down.

However, according to a recent survey on application performance management by the research firm Gleanster LLC, the database is the number one source of application performance issues. In fact, 88 percent of respondents cited the database as the most common challenge or issue with application performance.

 

Trying to identify database performance issues poses several unique challenges:

  • Databases are complex. Most people think of a database as a mysterious black box of secret information and are wary of digging too deep.
  • There are a limited number of tools that assess database performance. Most tools assess the health of a database (is it working, or is it broken?) but don’t identify and help remediate specific database performance issues.
  • Database monitoring tools that do provide more information don’t go that much deeper. Most send queries to and collect information from the database, with little to no insight into what happens inside the database that can impact performance.

To successfully assess database performance and uncover the root cause of application performance issues, IT pros must look at database performance from an end-to-end perspective.

 

In a best-practices scenario, the application performance team should be performing wait-time analysis as part of their regular application and database maintenance. A thorough wait-time analysis looks at every level of the database—from individual SQL statements to overall database capacity—and breaks down each step to the millisecond.
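As a rough illustration of what the starting point of a wait-time check can look like, here is a minimal Python sketch that pulls the top aggregate waits from a SQL Server instance. The connection details, the pyodbc dependency, and the use of SQL Server’s sys.dm_os_wait_stats view are assumptions for the example; other database engines expose similar wait or event views under different names.

    # Minimal sketch, assuming SQL Server and the pyodbc driver: list the top
    # aggregate wait types so you can see where the database spends its time.
    import pyodbc

    CONN_STR = (  # hypothetical connection details
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=db.example.local;DATABASE=master;UID=monitor;PWD=changeme"
    )

    TOP_WAITS_SQL = """
        SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE waiting_tasks_count > 0
        ORDER BY wait_time_ms DESC;
    """

    def top_waits():
        with pyodbc.connect(CONN_STR) as conn:
            cursor = conn.cursor()
            cursor.execute(TOP_WAITS_SQL)
            return cursor.fetchall()

    if __name__ == "__main__":
        for wait_type, tasks, wait_ms, signal_ms in top_waits():
            # I/O-heavy waits (e.g., PAGEIOLATCH_*) point at disk; high signal
            # wait time points at CPU scheduling pressure instead.
            print(f"{wait_type:<35} tasks={tasks:<8} wait_ms={wait_ms:<12} signal_ms={signal_ms}")

Commercial database performance tools do this kind of analysis continuously and down to the individual statement; the query above only surfaces instance-wide totals, but even that is enough to show whether the database is waiting mostly on disk, memory, locks, or CPU.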

 

The next step is to look at the results, correlate the information, and compare. Maybe the database spends most of its time writing to disk; maybe it spends more time reading from memory.

 

Ideally, all federal IT shops should implement regular wait-time analysis as a baseline of optimized performance. Knowing how to optimize performance—and understanding that it may have nothing to do with hardware—is a great first step toward staying ahead of the growing need for instantaneous access to information.


Read an extended version of this article on GCN

With its ongoing effort toward a Joint Information Environment, the Defense Department is experiencing something that’s extremely familiar to the enterprise world: a merger. The ambitious effort to consolidate communications, computing, and enterprise services into a single platform is very similar to businesses coming together and integrating disparate divisions into a cohesive whole. Unlike a business merger, however, JIE will have a major impact on the way DOD IT is run, ultimately providing better flow of and access to information that can be leveraged throughout all aspects of the department.

 

When JIE is complete, DOD will have a single network that will be much more efficient, more secure, and easier to maintain. IT administrators will have a holistic view of everything that’s happening on the network, allowing them to pinpoint how an issue in one area is not only detrimental to that portion of the network but also impacts other areas.

 

The JIE’s standard security architecture also means that IT managers will be able to more easily monitor and contain potential security threats and respond to them more rapidly. The ability to do so is becoming increasingly important, as evidenced by our recent survey, which illustrated the rise of cybersecurity threats.

 

As DOD kicks the JIE process into high gear, it is establishing Joint Regional Security Stacks (JRSS), which are intended to increase security and improve the effectiveness and efficiency of the network. However, the network will still be handling data from all DOD agencies and catering to thousands of users, making manual network monitoring and management of the JRSS unfeasible. As such, IT pros will want to implement Network Operations (NetOps) processes and solutions that help support the efforts toward greater efficiency and security.

 

The process should begin with an assessment of the current NetOps environment. IT pros must take inventory of the monitoring and management NetOps tools that are currently in use and determine if they are the correct solutions to help with deploying and managing the JIE.

 

Network managers should then explore the development of a continuous monitoring strategy, which can directly address DOD’s goals regarding efficiency and security.

 

Three key requirements to take into account in planning for continuous monitoring in JIE are:

 

  • Optimization for dual use. Continuous network monitoring tools, or NetOps tools, can deliver different views of the same IT data while providing insight and visibility into network health and performance. When continuous monitoring is implemented with “dual use” tools, they can serve the operations and security audiences simultaneously.
  • Understanding who changed what. With the implementation of JIE, DOD IT pros will be responsible for an ever-expanding number of devices connected to the network, and network configuration and change management tools track who changed what while enabling bulk change deployment to thousands of devices.
  • Tracking the who, what, when and where of security events. Security information and event management (SIEM) tools are another particularly effective component of continuous monitoring; their emphasis on security could make them an integral part of monitoring the JRSS. SIEM capabilities enable IT pros to gain valuable insight into who is logging onto DOD’s network and the devices they might be using, as well as who is trying to log in but being denied access (see the sketch after this list).
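To make the “who, what, when, and where” idea concrete, here is a small, purely hypothetical Python sketch of the kind of correlation a SIEM automates: it scans an authentication log and flags accounts with repeated denied logins. The log format, file path, and threshold are invented for illustration; real SIEM tools correlate many event sources at once.

    # Hypothetical SIEM-style check: count denied logins per account and source,
    # then flag repeat offenders. Log format ("timestamp user result source_ip")
    # and the file path are invented for illustration.
    from collections import Counter

    LOG_PATH = "auth_events.log"
    FAILED_THRESHOLD = 5

    def denied_logins(path):
        denials = Counter()
        with open(path, encoding="utf-8") as log:
            for line in log:
                fields = line.split()
                if len(fields) >= 4 and fields[2] == "DENIED":
                    user, source_ip = fields[1], fields[3]
                    denials[(user, source_ip)] += 1
        return denials

    if __name__ == "__main__":
        for (user, source_ip), count in denied_logins(LOG_PATH).items():
            if count >= FAILED_THRESHOLD:
                print(f"Review: {user} denied {count} times from {source_ip}")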

 

Like any merger, there are going to be stumbling blocks along the way to the JIE’s completion, but the end result will benefit many – including overworked IT pros desperate for greater efficiency. While there’s no doubt the JIE is a massive undertaking, managing the network it creates does not have to be.

 

To read an extended version of this article, visit Defense Systems

When CIOs and key stakeholders were issued guidance on the implementation of IT shared services as part of a key strategy to eliminate waste and duplication back in 2012, it remained to be seen how quickly implementation would take place and how much benefit would permeate into agencies. As recently as April, we were still talking about how IT infrastructures remain “walled off within individual agencies, between agencies, and even between bureaus and commands.” So we decided to take a closer look and ask federal IT pros what they are seeing on the ground.

 

Partnering with the government research firm Market Connections to survey 200 federal IT decision makers and influencers, we drilled down into their view of IT shared services. To start with, we provided a universal definition: IT shared services cover the entire spectrum of IT service opportunities, either within or across federal agencies, where that service was previously provided by each individual agency. Is this how you define IT shared services?

To set the scene, only 21 percent of respondents indicated that IT shared services was a priority in terms of leadership focus, falling very close to the bottom of all priorities. Additionally, the IT pros surveyed indicated that they feel in control of shared services within their environment.

 

However, we are impressed with the amount of IT services being shared; specifically, 64 percent of DoD respondents indicated being recipients of shared services. The DoD’s adoption of enterprise email is probably the most visible and widespread use of a shared service in the department. DISA provides agencies with the ability to purchase an enterprise email system directly from its website and provides support services on a wide scale.


Additionally, over 80 percent of respondents think that IT shared services, either within government or outsourced, provide a superior end-user experience. A large portion of respondents also believe that IT shared services benefit all stakeholders, including IT department personnel, agency leadership, and citizens.


IT shared service implementation seems to still be facing an uphill battle and typical change management challenges. However, IT pros have still identified some key benefits, including: saving money, achieving economies of scale, standardized delivery and performance, and opportunities to innovate. And this is good to see, because these were many of the objectives of shared services to begin with.


Are you using IT shared services in your environment? What challenges are you experiencing in implementation? What benefits are your end-users seeing?

Full survey results: http://www.solarwinds.com/assets/industry/surveys/solarwinds-state-of-government-it-management-and-m.aspx

In our second visit back to our 2015 pro-dictions, let’s take a look at the evolution of the IT skill set. It appears Patrick Hubbard was spot-on in his prediction about IT pros needing to broaden their knowledge bases to stay relevant. IT generalists who stay with what they know best are going the way of the dinosaur, while IT versatilists and specialists are paving the way to the future.



Earlier this year, kong.yang wrote a blog addressing this very topic, which he dubbed the Stack Wars. There, he lightly touched on generalists, versatilists, and specialists. Read the following article to get a deeper look at each of the avenues IT pros can pursue: “Why Today’s Federal IT Managers May Need to Warm Up to New Career Paths.”

 

Fans of spy fiction (of which there are many in the government ranks) might be familiar with the term “come in from the cold.” It refers to someone who has been cast as an outsider and now wishes to abandon the past, be embraced, and become relevant again.

 

It’s a term that reflects the situation that many federal IT managers find themselves in as government agencies begin focusing on virtualization, automation and orchestration, cloud computing, and IT-as-a-Service. Those who were once comfortable in their position as jacks-of-all-IT-trades are being forced to come in from the cold by choosing new career paths to remain relevant.

 

Today, there’s very little room for “IT generalists.” A generalist is a manager who possesses limited knowledge across many domains. They may know how to tackle basic network and server issues, but may not understand how to design and deploy virtualization, cloud, or similar solutions that are becoming increasingly important for federal agencies.

 

And yet, there’s hope for IT generalists to grow their careers and become relevant again. That hope lies in choosing between two different career paths: that of the “IT versatilist” or “IT specialist.”

 

The IT Versatilist

An IT versatilist is someone who is fluent in multiple IT domains. Versatilists have broadened their knowledge base to include a deep understanding of several of today’s most buzzed-about technologies. Versatilists can provide their agencies with the expertise needed to architect and deliver a virtualized network, cloud-based services, and more. Versatilists also have the opportunity to have a larger voice in their organization’s overall strategic IT direction, simply by being able to map out a future course based on their familiarity with deploying innovative and flexible solutions.

 

The IT Specialist

Like versatilists, IT specialists have become increasingly valuable to agencies looking for expertise in cutting-edge technologies. However, specialists focus on a single IT discipline, usually tied directly to a specific application. Still, specialists have become highly sought after in their own right. A person who’s fluent in an extremely important area, like network security, will find themselves in demand by agencies starved for security experts.

 

Where does that leave the IT generalist?

Put simply – on the endangered list.

Consider that the Department of Defense (DoD) is making a major push toward greater network automation. Yes, this helps take some items off the plates of IT administrators, but it also minimizes the DoD’s reliance on human intervention in its technologies. While the DoD is not necessarily actively trying to replace the people who manage its networks and IT infrastructure, it stands to reason that those who have traditionally been “keeping the lights on” might be considered replaceable commodities in this type of environment.

 

If you’re an IT generalist, you’ll want to expand your horizons to ensure that you have deep knowledge of, and expertise in, IT constructs in at least one relevant area, where relevant means a discipline that is considered critically important today. Most likely, those disciplines will center on things like containers, virtualization, data analytics, OpenStack, and other new technologies.

 

Whatever the means, generalists must become familiar with the technologies and methodologies that are driving federal IT forward. If they don’t, they risk getting stuck out in the cold – permanently.

 

**Note: This article was originally published in Technically Speaking.**

If you recall, to kick off the New Year, the Head Geeks made predictions forecasting how they thought the year in IT would unfold. Now that we’re past the midpoint of 2015, I thought it would be fun to revisit some of their predictions over the coming weeks.


So, to kick things off, let’s start with the following prediction:


It’s safe to say that this prediction from adatole holds true. Security issues can be devastating for a company, so let’s take a look at this related article, “Preventing a minor insider accident from becoming a security catastrophe.”

 

There are accidents – and then there are accidents.


A dog eating a kid’s homework is an accident. Knocking over a glass of water is an accident. A fender-bender at a stop sign is an accident.

The incorrect use of personal devices or the inadvertent corruption of mission-critical data by a government employee can turn out to be more than simple accidents, however. These activities can escalate into threats that can result in national security concerns.


These types of accidents happen more frequently than one might expect — and they’ve got DOD IT professionals worried. For all of the media’s and the government’s focus on external threats — hackers, terrorists, foreign governments, etc. — the biggest concern continues to be threats from within.


As a recent survey by my company, SolarWinds, points out, administrators are especially cognizant of the potential for colleagues to make havoc-inducing mistakes. Yes, it’s true: DOD technology professionals are just as concerned about the person next to them making a mistake as they are about an external Anonymous-style group or a rogue hacker.

So, what are agencies doing to tackle internal mistakes? Primarily, they’re bolstering federal security policies with their own security policies for end-users.

While this is a good initial approach, it’s not nearly enough.


IT professionals need more than just intuition and intellect to address compromises resulting from internal accidents. Any monitoring of potential security issues should include the use of technology that allows IT administrators to pinpoint threats as they arise, so they may be addressed immediately and without damage.


Thankfully, there are a variety of best practices and tools that address these concerns and nicely complement the policies and training already in place, including:

  • Monitoring connections and devices on the network and maintaining logs of user activity to track: where on the network certain activity took place, when it occurred, what assets were on the network, and who was logged into those assets.
  • Identifying what is or was on the network by monitoring network performance for anomalies, tracking devices, offering network configuration and change management, managing IT assets, and monitoring IP addresses.
  • Implementing tools identified as critical to preventing accidental insider threats, such as those for identity and access management, internal threat detection and intelligence, intrusion detection and prevention, SIEM or log management, and Network Admission Control.

Our survey respondents called out each of these tools as useful in preventing insider threats. Together and separately, they can assist in isolating and targeting network anomalies. Log and event management tools, for example, can monitor the network, detect any unauthorized (or, in this case, accidental) activity, and generate instant analyses and reports. They can help IT professionals correlate a problem — say, a network outage — directly to a particular user. That user may or may not have inadvertently created an issue, but it doesn’t matter. The software, combined with the policies and training, can help administrators attack it before it goes from simple mistake to “Houston, we have a problem.”
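As a simplified, hypothetical example of that kind of correlation, the sketch below matches an outage window for a device against a configuration-change log to surface which user changed what shortly before the problem started. The record structure and all names are invented for illustration; log and event management products do this across far more data sources and in real time.

    # Hypothetical sketch: given an outage window for a device, list the recorded
    # configuration changes (and the users who made them) just before it started.
    from datetime import datetime, timedelta

    CHANGES = [
        {"time": datetime(2015, 6, 1, 9, 42), "user": "jsmith",
         "device": "core-rtr-01", "action": "modified ACL 110"},
        {"time": datetime(2015, 6, 1, 11, 5), "user": "akhan",
         "device": "edge-sw-04", "action": "changed VLAN trunking"},
    ]

    def changes_before_outage(records, device, outage_start, lookback_hours=4):
        """Return changes on 'device' made within the lookback window before the outage."""
        window_start = outage_start - timedelta(hours=lookback_hours)
        return [r for r in records
                if r["device"] == device and window_start <= r["time"] <= outage_start]

    if __name__ == "__main__":
        outage_start = datetime(2015, 6, 1, 10, 15)  # when core-rtr-01 went down
        for record in changes_before_outage(CHANGES, "core-rtr-01", outage_start):
            print(f'{record["time"]}: {record["user"]} {record["action"]}')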

The fact is, data that’s accidentally lost can easily become data that’s intentionally stolen. As such, you can’t afford to ignore accidental threats, because even the smallest error can turn into a very large problem.


**Note: This article was originally published by Defense Systems.**


A Lap around AppStack

Posted by joeld Jan 30, 2015

What is AppStack?

 

AppStack is a new technology that brings multiple SolarWinds products together in an innovative way. AppStack provides visibility into the different infrastructure elements that support an application deployed in production and, most importantly, how they are related to each other.

 

For instance, a typical multi-tier application will be deployed on multiple virtual machines located on one or more hypervisors and accessing one or more datastores. Each of those components uses some level of storage that can be either direct-attached or network-attached.

 

 

AppStack allows an application owner or IT operator to quickly identify the key infrastructure elements that might be impacting application delivery.

 

The Orion Platform

 

Let’s take a quick look at how SolarWinds products are architected in the first place. SolarWinds has a large portfolio of products that can be separated into two main categories. The first category contains all the products primarily focused on monitoring various parts of the IT infrastructure that run on top of the Orion Platform. The second category is composed of all the other tools that do not run on top of Orion. AppStack relates to the products in the first category, such as Server & Application Monitor (SAM), Virtualization Manager (VMan), and Storage Resource Monitor (SRM).

 

The Orion platform provides a core set of services that are used by the various products that run on top of it. Some of the most important services exposed by the platform are alerting, reporting, security, the UI framework, data acquisition, and data storage, as shown in the following diagram:

 

 

The products that run on top of Orion are plug-ins that take advantage of those services and are sold individually.

 

How does AppStack work?

 

AppStack is not a new product by itself, but rather, it extends the capabilities of the Orion platform in various ways, allowing all the products running on top of Orion to take advantage of those new features.

 

One of the key enablers of AppStack is the information model maintained by the Orion platform. In the background, Orion maintains a very rich representation of how the entities monitored by the various products relate to each other. This information model is maintained by the Information Service, which exposes a set of APIs (SOAP and RESTful) to allow easy access to this information. The information model has been a key foundation of each individual product for several years and is now being leveraged to another level by AppStack.
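As a rough illustration of what querying that model can look like, here is a minimal Python sketch against the Information Service’s JSON query endpoint. The port, URL path, and the SWQL entity and property names are taken from the publicly documented SolarWinds SDK but should be treated as assumptions to verify against your own deployment.

    # Minimal sketch: query the Orion Information Service (SWIS) REST endpoint.
    # Port, path, and the SWQL entity/property names below are assumptions taken
    # from the publicly documented SolarWinds SDK; adjust for your environment.
    import requests

    SWIS_URL = "https://orion.example.local:17778/SolarWinds/InformationService/v3/Json/Query"

    def query_swis(swql, username, password):
        """Run a SWQL query and return the list of result rows."""
        resp = requests.get(
            SWIS_URL,
            params={"query": swql},
            auth=(username, password),
            verify=False,  # Orion commonly ships with a self-signed certificate
        )
        resp.raise_for_status()
        return resp.json().get("results", [])

    if __name__ == "__main__":
        # Example: list monitored nodes and their status (hypothetical credentials).
        rows = query_swis(
            "SELECT TOP 10 NodeID, Caption, Status FROM Orion.Nodes",
            "admin", "changeme",
        )
        for row in rows:
            print(row["NodeID"], row["Caption"], row["Status"])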

 

As the following diagram shows, the metadata model is extended and populated by the different products installed on top of Orion. By default, Orion ships with a very minimal model; its sole purpose is to serve as the foundation for the product models. One key aspect of the model is that it does not require different products to know about each other, yet it still allows them to extend each other as they are purchased and installed by our users over time. For instance, Server & Application Monitor does not need to know about Virtualization Manager in order for the Application entity to be related to the Virtual Machine entity.
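The following toy Python sketch (not SolarWinds code; every name in it is invented) illustrates the general pattern: independent modules register their own entity types and relationships in a shared model by name, so neither module ever has to import or reference the other.

    # Toy illustration of a shared, extensible information model (not SolarWinds
    # code): each "product" registers its own entity types and relationships by
    # name, without importing or referencing any other product.
    class InformationModel:
        def __init__(self):
            self.entity_types = set()
            self.relationship_types = set()

        def register_entity(self, name):
            self.entity_types.add(name)

        def register_relationship(self, source, target, kind):
            # Relationships refer to entity types by name, so one product can
            # declare a link to a type that another product registers later.
            self.relationship_types.add((source, target, kind))

    model = InformationModel()

    # Application-monitoring module: knows nothing about virtualization.
    model.register_entity("Application")
    model.register_relationship("Application", "VirtualMachine", "RunsOn")

    # Virtualization-monitoring module: installed and registered separately.
    model.register_entity("VirtualMachine")
    model.register_entity("Host")
    model.register_relationship("VirtualMachine", "Host", "HostedBy")

    print(sorted(model.entity_types))        # ['Application', 'Host', 'VirtualMachine']
    print(sorted(model.relationship_types))  # [('Application', 'VirtualMachine', 'RunsOn'), ...]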

 

 

 

 

Having a rich set of entities is a good start, but the key to success is ensuring that the relationships between those entities are also captured as part of the model. AppStack exploits the presence of those relationships in the information model to tie together the different pieces of infrastructure that support an application.


In the AppStack dashboard, every time the user clicks on one of the entities displayed, AppStack retrieves, behind the scenes, the list of entities that are directly or indirectly related to the selected entity and highlights them in the user interface. This provides a very intuitive way of troubleshooting application delivery issues.
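Conceptually, that lookup is a graph traversal over the relationship model. The hypothetical Python sketch below shows the idea: entities are nodes, relationships are edges, and selecting an entity collects everything reachable from it. The entity names are made up, and this is an illustration of the concept rather than AppStack’s actual implementation.

    # Hypothetical illustration only: model entities and relationships as a graph
    # and find everything directly or indirectly related to a selected entity.
    from collections import defaultdict, deque

    class EntityGraph:
        def __init__(self):
            self._edges = defaultdict(set)

        def relate(self, a, b):
            """Record a bidirectional relationship between two entities."""
            self._edges[a].add(b)
            self._edges[b].add(a)

        def related_to(self, start):
            """Breadth-first traversal: all entities reachable from 'start'."""
            seen, queue = {start}, deque([start])
            while queue:
                current = queue.popleft()
                for neighbor in self._edges[current]:
                    if neighbor not in seen:
                        seen.add(neighbor)
                        queue.append(neighbor)
            seen.discard(start)
            return seen

    # Example relationships (names are made up): app -> VM -> host -> datastore.
    g = EntityGraph()
    g.relate("App:OrderService", "VM:web-01")
    g.relate("VM:web-01", "Host:esx-07")
    g.relate("Host:esx-07", "Datastore:ds-prod-02")

    # Selecting the application highlights every related infrastructure element.
    print(g.related_to("App:OrderService"))
    # {'VM:web-01', 'Host:esx-07', 'Datastore:ds-prod-02'}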

 

An important point to mention is that all of this happens without any prior configuration by the user.

 

This blog post provided a quick overview of what AppStack is and how it works behind the scenes. If you have any questions, don’t hesitate to reach out to me on Twitter at @jdolisy.

VMware’s big hype for VMworld 2013 in San Francisco was all about bringing software-defined data center capabilities to market. Since the compute aspect of the data center already has advanced virtualization capabilities (i.e., it is “software-defined”), the primary focus was on advancing the network and storage capabilities. While VMware’s storage announcements help meet that goal, in general they were not as far along as the networking side and sound a lot like what Microsoft has been doing with its recent releases. The networking capabilities announced under the NSX name look more mature and potentially ready for deployment in the right situation.

 

NSX is really the next logical evolution for VMware. Given its dominance in compute virtualization, VMware would like to extend that dominant position to the rest of the data center. It pretty much announced its direction and intent last year with the Nicira acquisition and a similar set of directional statements at VMworld 2012. This year we got more details of how things will really work.

 

At a high level, NSX is focused on taking over “east-west” network traffic, VMware’s term for the traffic between VMs that passes through the networking infrastructure. VMware claimed that as much as 60 to 70 percent of network traffic is traffic between VMs, with the remainder being traffic between the VMs/data center and the external network (i.e., “north-south” traffic). NSX will include virtual switch, router, firewall, and load balancer capabilities. VMware is using a fusion of its existing vSphere vSwitch and the Nicira technology in the solution. It actually consists of two products, NSX for vSphere and NSX for multiple hypervisors.

 

Currently, the network is often the bottleneck when it comes to dynamic workload placement. Moving a workload, including its storage, from one hypervisor host to another can be done in a matter of minutes. However, if network reconfiguration is required, that can often take days to complete. NSX provides complete network stack encapsulation over the existing Layer 3 physical network. This provides an opportunity to move the network to the same level of encapsulation as compute and storage, allowing snapshotting, rollback, and cloning, along with the potential to provision or reconfigure in a matter of minutes.

 

But VMware’s NSX announcement does raise a number of interesting questions.  Some of these key questions and initial thoughts are provided below.

 

  • How does NSX impact physical network architecture? Should customers rethink their basic network design?
      ◦ This could change the primary goal of physical network design to focus on high availability and performance, not necessarily on application traffic segregation anymore.
  • How do you manage and monitor comprehensive network health and performance?
  • Who is responsible for network issues?
      ◦ Network engineers and admins will still be needed; all the protocol alphabet soup is still there when it comes to configuration and interoperability.
  • How fast will the software be adopted versus other efforts such as OpenFlow?
      ◦ It is likely to see faster adoption for a number of reasons:
          ▪ NSX will have no dependence on physical switches.
          ▪ There are no multi-vendor compatibility issues.
          ▪ VMware has complete control over the inner protocols and implementation.
          ▪ The functionality will be built into the hypervisor.
  • Where is the competition relative to VMware?
      ◦ VMware has leaped over Microsoft once again. Microsoft introduced interesting networking capabilities with Hyper-V v3 in Server 2012, but they look a lot less advanced compared to NSX.
  • How will VMware expose virtualization monitoring and management capabilities for NSX?
      ◦ This was not clear from VMworld 2013 and is still an open question.
      ◦ Some diagnostic tools were demonstrated, but to be successful those capabilities need to be integrated with existing solutions.
      ◦ vCOps will be updated to provide visibility at both levels, but it’s not clear how soon that will be available.

 

In summary, the virtual networking capability is an impressive innovation brought forward by VMware. As with any new disruptive technology brought to the marketplace, it comes with its own set of questions and uncertainties. It now potentially puts VMware in control of the last technology pillar needed to make the SDDC a reality. Vendors like SolarWinds will monitor those changes and ensure that their existing and future customers maximize their investments in those new technologies while still relying on their monitoring and management solutions to provide the insight they need.
