
Geek Speak

23 Posts authored by: joeld

Within the government, particularly the U.S. Defense Department, video traffic—more specifically videoconference calling—often is considered mission critical.


The Defense Department uses video for a broad range of communications. One of the most critical uses is video teleconference (VTC) inside and outside of the United States and across multiple areas of responsibility. Daily briefings—via VTC over an Internet protocol (IP) connection—help keep everyone working in sync to accomplish the mission. So, you can see why it is so important for the network to be configured and monitored to ensure that VTCs operate effectively.


VTC and network administration tasks boil down to a few key points:


  • Ensuring the VTC system is up and operational (monitoring).
  • Setting up the connections to other endpoints (monitoring).
  • Ensuring that the VTC connection operates consistently during the call (quality of service).
  • Troubleshooting at the VTC system level (VTC administration); after the connection reaches the network, the network administrator takes over to ensure that it stays alive (monitoring/configuration).


Ensuring Quality of Service for Video over IP


The DOD has developed ways to ensure successful live-traffic streaming over an IP connection. These requirements focus on ensuring that video streaming has the low latency and high throughput needed among all endpoints of a VTC. Configuring the network to support effective VTCs is challenging, but it is done through implementing quality of service (QoS).


You can follow these four steps:


Step 1: Establish priorities. VTC traffic will need the highest priority. Email would likely have the lowest priority, while one-way streaming video (as opposed to interactive VTC) will likely have a low priority as well.


Step 2: Test your settings. Have you set up your QoS settings so that VTC traffic has the highest priority?


Step 3: Implement your settings. Consider an automated configuration management tool to speed the process and eliminate errors.


Step 4: Monitor your network. Once everything is in place, monitor to make sure policies are being enforced as planned and learn about network traffic.
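Step 1 can be sketched as a simple traffic-class table. This is an illustrative Python sketch only; the class names and DSCP markings are common conventions, not DoD policy, so substitute your agency's QoS standard.

```python
# Illustrative traffic-class table, highest priority first. DSCP values
# (AF41 for interactive video, AF21 for one-way streaming, best effort
# for email) are typical conventions, not mandated settings.
QOS_CLASSES = [
    {"name": "vtc",       "dscp": 34, "priority": 1},  # interactive video (AF41)
    {"name": "streaming", "dscp": 18, "priority": 3},  # one-way video (AF21)
    {"name": "email",     "dscp": 0,  "priority": 4},  # best effort
]

def classify(app_name):
    """Return the QoS class for an application, defaulting to best effort."""
    for cls in QOS_CLASSES:
        if cls["name"] == app_name:
            return cls
    return QOS_CLASSES[-1]
```

A table like this becomes the single source of truth that the testing (Step 2) and implementation (Step 3) stages both read from.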


Configuring and Monitoring the Network


Network configuration is no small task. Initial configuration and subsequent configuration management ensure that routers are configured properly, traffic is prioritized as planned, and video traffic is flowing smoothly.


Network configuration management software that automates the configuration tasks of implementing complex QoS settings can be useful, and should support the automation of:


  1. Pushing out QoS settings to the routers. QoS settings are fairly complex to implement, and doing so manually invites errors, so automate the rollout.
  2. Validating that the changes have been made correctly. After the settings are implemented on a router, back up and verify the configuration settings.
  3. Configuration change notification.
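The three items above form one push-validate-notify loop. A minimal sketch, where `push_config` and `fetch_running_config` are hypothetical callables standing in for whatever transport your configuration tool actually uses (SSH, NETCONF, etc.):

```python
# Hypothetical push / validate / notify workflow. The transport callables
# are assumptions; only the control flow is the point here.
import hashlib

def deploy_qos(router, intended_config, push_config, fetch_running_config, notify):
    """Push a config, re-read the device to verify it took, and report."""
    push_config(router, intended_config)
    running = fetch_running_config(router)      # back up / re-read the device
    ok = hashlib.sha256(running.encode()).hexdigest() == \
         hashlib.sha256(intended_config.encode()).hexdigest()
    notify(f"{router}: QoS change {'verified' if ok else 'FAILED validation'}")
    return ok
```

Hashing the retrieved config doubles as the backup-and-verify step: the same digest can be stored as the post-change baseline.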


Network monitoring tools help validate critical network information, and should provide you with the following information:


  1. When and where is my network infrastructure busy?
  2. Who is using the network at those hot spots and for what purpose?
  3. When is the router dropping traffic, and what types of packets are being dropped?
  4. Are the VTC systems on your side and the far side of the call up and operational?
  5. Do node and interface baselines reveal abnormal spikes during the day?
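Questions 1 and 2 above boil down to aggregating traffic data by interface and host. A toy sketch, assuming flow records shaped like `{"interface", "src", "bytes"}` (field names are assumptions about your collector's export format):

```python
# Summarize flow records to answer "where is the network busy, and who
# is using it there?" Record fields are illustrative assumptions.
from collections import defaultdict

def top_talkers(flows, interface, n=3):
    """Total bytes per source host on one interface, busiest first."""
    usage = defaultdict(int)
    for f in flows:
        if f["interface"] == interface:
            usage[f["src"]] += f["bytes"]
    return sorted(usage.items(), key=lambda kv: kv[1], reverse=True)[:n]
```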


What are your best practices for ensuring video traffic gets through? Do you have any advice you can share?


Find the full article on Signal.

The federal technology landscape has moved from secure desktops and desk phones to the more sprawling environment of smartphones, tablets, personal computers, USB drives and more. The resulting “device creep” can often make it easier for employees to get work done – but it can also increase the potential for security breaches.


Almost half of the federal IT professionals who responded to our cyber survey last year indicated that the data that is most at risk resides on employee or contractor personal computers, followed closely by removable storage tools and government-owned mobile devices.


Here are three things federal IT managers can do to mitigate risks posed by these myriad devices:


1. Develop a suspicious device watch list.


As a federal IT manager, you know which devices are authorized on your network – but, more importantly, you also know which devices are not. Consider developing a list of unapproved devices and have your network monitoring software automatically send alerts when one of them attempts to access the network.
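The watch-list idea is simple enough to sketch. The MAC addresses below are placeholders; in practice the list would be fed from your network monitoring software's device inventory.

```python
# Minimal device watch list: alert when a known-unapproved device
# attempts network access. The MACs here are hypothetical placeholders.
WATCH_LIST = {"de:ad:be:ef:00:01", "de:ad:be:ef:00:02"}

def check_device(mac, alert):
    """Fire an alert and deny if the connecting device is on the watch list."""
    if mac.lower() in WATCH_LIST:
        alert(f"Unapproved device {mac} attempted network access")
        return False
    return True
```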


2. Ban USB drives.


The best bet is to ban USB drives completely, but if you’re not willing to go that far, invest in a USB defender tool. A USB defender tool in combination with a security information and event management (SIEM) system will allow you to correlate USB events with other potential system usage and/or access violations to alert against malicious insiders.


They can be matched to network logs which help connect malicious activities with a specific USB drive and its user. They can also completely block USB use and user accounts if necessary. This type of tool is a very important component in protecting against USB-related issues.
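The correlation step can be sketched as a time-window join between the two event streams. The event field names are assumptions about what a USB defender and SIEM might export, not any product's actual schema:

```python
# Pair USB-insert events with access-violation log entries from the same
# user within a short time window. Event shapes are illustrative.
def correlate(usb_events, violations, window=300):
    """Return (user, usb_serial, violation) triples that overlap in time."""
    hits = []
    for u in usb_events:
        for v in violations:
            if u["user"] == v["user"] and abs(u["time"] - v["time"]) <= window:
                hits.append((u["user"], u["serial"], v["event"]))
    return hits
```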


3. Deploy a secure managed file transfer (MFT) system.


Secure managed file transfer systems can meet your remote storage needs with less risk.


File Transfer Protocol (FTP) used to get a bad rap as being unsecure, but that’s not necessarily the case. Implementing an MFT system can build a high level of security around FTP, while still allowing employees to access files wherever they may be and from any government-approved device.


MFT systems also provide IT managers with full access to files and folders so they can actively monitor what data is being accessed, when and by whom. What’s more, they eliminate the need for USBs and other types of remote storage devices.


Underlying all of this, of course, is the need to proactively monitor and track all network activity. Security breaches are often accompanied by noticeable changes in network activity – a spike in afterhours traffic here, increased login attempts to access secure information there.


Network monitoring software can alert you to these red flags and allow you to address them before they become major issues. Whatever you do, do not idly sit back and hope to protect your data. Instead, remain ever vigilant and on guard against potential threats, because they can come from many places – and devices.


Find the full article on Government Computer News.

Today’s federal IT infrastructure is built in layers of integrated blocks, and users have developed a heavy reliance on well-functioning application stacks. App stacks are composed of application code and all of the software and hardware components needed to effectively and reliably run the applications. These components are very tightly integrated. If one area has a problem, the whole stack is affected.


It’s enough to frustrate even the most hardened federal IT manager. But don’t lose your cool. Instead, take heart, because there are better ways to manage this complexity. Here are areas to focus on, along with suggested methodologies and tools:


1. Knock down silos and embrace a holistic viewpoint.


Thanks to app stacks, the siloed approach to IT is quickly becoming irrelevant. Instead, managing app stacks requires realizing that each application serves to support the entire IT foundation.


That being said, you’ll still need to be able to identify and address specific problems when they come up. But you don’t have to go it alone; there are tools that, together, can help you get a grasp on your app stack.


2. Dig through the code and use performance monitoring tools to identify problems.


There are many reasons an app might fail. Your job is to identify the cause of the failure. To do that you’ll need to look closely at the application layer and keep a close eye on key performance metrics using performance monitoring tools. These tools can help you identify potential problems, including memory leaks, service failures and other seemingly minor issues that can cause an app to nosedive and take the rest of the stack with it.


3. Stop manually digging through your virtualization layers.


It’s likely that you have virtualization layers buried deep in your app stack. These layers probably consist of virtual machines that are frequently migrated from one physical server to another and storage that needs to be reprovisioned, reallocated and presented to servers.


Handling this manually can be extremely daunting, and identifying a problem in this layer can seem impossible. Consider integrating an automated VM management approach with the aforementioned performance monitoring tools to gain complete visibility of these key app stack components.


4. Maximize and monitor storage capabilities.


Storage is the number one catalyst behind application failures. The best approach here is to ensure that your storage management system monitors performance, automates capacity planning, and reports regularly so you can ensure applications continue to run smoothly.


You’ll be able to maintain uptime, leading to consistent and perhaps increased productivity throughout the organization. And you’ll be able to keep building your app stack – without the fear of it all tumbling down.


Find the full article on Government Computer News.

Tomorrow is the first day of the Caribbean hurricane season, and that means named storms, power outages and the need for IT emergency preparedness. Now is a great time to make sure your disaster toolbox is well stocked, before a major calamity strikes. And as a federal IT manager, you always have to be prepared for the unnatural disaster, too, such as a cyberattack.


The scary thing is that even the idea of creating a disaster recovery plan has been put on the backburner at many government agencies. In fact, according to a federal IT survey we conducted last year, over 20 percent of respondents said they did not have a disaster preparedness and response plan in place.


We suggest that you make sure you have a plan in place, and follow these best practices:


Continuously monitor the network. Here’s a phrase to remember: “collect once, report to many.” This means installing software that automatically and continuously monitors IT operations and security domains, making it easier for federal IT managers to pinpoint – or even proactively prevent – problems related to network outages and system downtime.


Continuous monitoring can give IT professionals the information needed to detect abnormal behavior much faster than manual processes. This can help federal managers react to these challenges quickly and reduce the potential for extended downtime.


Monitor devices, not just the infrastructure. You need to keep track of all of the devices that impact your network, including desktops, laptops, smartphones and tablets.


For this, consider implementing tools that can track individual devices. First, devise a whitelist of devices acceptable for network access. Then, set up automated alerts that notify you of non-whitelisted devices tapping into the network or any unusual activity. Most of the time, these alerts can be tied directly to specific users. This tactic can be especially helpful in preventing those non-weather-related threats I referred to earlier.
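The whitelist tactic above can be sketched in a few lines; the point is that each alert carries the associated user, so incidents can be attributed quickly. The MAC address and connection shape are illustrative assumptions:

```python
# Whitelist audit: every non-whitelisted device seen on the network raises
# an alert that names the user it was tied to. Values are placeholders.
WHITELIST = {"aa:bb:cc:00:00:01"}   # devices approved for network access

def audit_connections(connections, alert):
    """connections: iterable of (mac, user) pairs observed on the network."""
    for mac, user in connections:
        if mac not in WHITELIST:
            alert(f"Non-whitelisted device {mac} (user: {user}) on network")
```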


Plan for remote network management. There’s never an opportune time for a disaster, but some occasions are just, well, disastrous. For example, when a hurricane knocks out electricity in your data center and you’re stuck at home, unable to get to the site. In such cases, you’ll want to make sure you have software that allows you to remotely manage and fix anything that might adversely impact your network.


Remote management technology typically falls into two categories: in-band and out-of-band remote management. Both get the job done for their particular circumstances. And, there are some instances where remote management is insufficient. It’s perfectly adequate when your site loses power, or your network goes offline, but in the face of a major catastrophe – massive floods, for example – you’ll need onsite management. In many cases, however, remote management tools will be more than enough to get you through some rough spots without you having to get to the office.


Each of these best practices, and the technologies associated with them, are like backup generators. You may never need to use them, but when and if you do, you’ll be glad you have them at your disposal.


Find the full article on Government Computer News.

If you've ever read the “Adventures of Sherlock Holmes” by Sir Arthur Conan Doyle, you're probably familiar with some of the plot contrivances. They usually entail a highly complex scheme that involves different machinations, takes twists and turns, and requires the skills of none other than The World's Greatest Detective to solve.


Today's government networks are a bit like a Holmes story. They involve many moving parts, sometimes comprising new and old elements working together. And they are the central nervous system of any IT application or data center infrastructure environment – on premise, hosted, or in the cloud.


That's why it's so important for IT pros to be able to quickly identify and resolve problems. But the very complexity of these networks can often make that task a significant challenge.


When that challenge arises, it requires skills of a Sherlockian nature to unravel the diabolical mystery surrounding the issue. And, as we know, there's only one Sherlock Holmes, just as there's only one person with the skills to uncover where the network problems lie.


That would be you, my dear federal IT professional.


Your job has changed significantly over the past couple of years. Yes, you still have to "keep the lights on," as it were, but now you have even greater responsibilities. You've become a more integral, strategic member of your agency, and your skills have become even more highly valued. You're in charge of the network, the foundation for just about everything that takes place within your organization.


To keep things flowing, you need to get a handle on everything taking place within your network, and the best way is through a holistic network monitoring approach.


Holistic network monitoring requires that all components of the network puzzle – including response time, availability, performance and devices -- are analyzed and accounted for. These days, it also means taking into consideration the many applications that are tied together across wireless, LAN, WAN, and cloud networks, not to mention the resources (such as databases, servers, virtualization, storage) they use to function properly.


Network monitoring and performance optimization solutions help solve the mystery entwined within this diabolical complexity. They can help you identify and pinpoint issues before they become real issues – security threats, such as detection of malware and rogue devices, but also productivity threats, including hiccups that can cause outages and downtime.


And, let's not forget a key perpetrator to poor application performance: network latency. Network monitoring tools allow you to automatically and continuously monitor packets, application traffic, response times and more. Further, they provide you with the ability to respond quickly to potential issues, and the ability to do this is absolutely critical.


As Sherlock said in “A Study in Scarlet,” "there is nothing like first-hand evidence." Network monitoring solutions provide just that – first-hand evidence of issues as they arise, wherever they may take place within the network. As such, implementing a holistic approach to network management can make solving even the biggest IT mysteries elementary.


Find the full article on Defense Systems.

It was all about the network


In the past, when we thought about IT, we primarily thought about the network. When we couldn’t get email or access the Internet, we’d blame the network. We would talk about network complexity and look at influencers such as the number of devices, the number of routes data could take, or the available bandwidth.


As a result of this thinking, a myriad of monitoring tools was developed to help network engineers keep an eye on the availability and performance of their networks; these tools provided basic network monitoring.


It’s now all about the service


Today, federal agencies cannot function without their IT systems being operational. It’s about providing critical services that will improve productivity, efficiency, and accuracy in decision making and mission execution. IT needs to ensure the performance and delivery of the application or service, and understand the application delivery chain.


Advanced monitoring tools for servers, storage, databases, applications, and virtualization are widely available to help diagnose and troubleshoot the performance of these services, but one fact remains: the delivery of these services relies on the performance and availability of the network. And without these critical IT services, the agency’s mission is at risk.


Essential monitoring for today’s complex IT infrastructure


Users expect to be able to connect anywhere and from anything. Add to that, IT needs to manage legacy physical servers, new virtual servers, and cloud infrastructure as well as cloud-based applications and services, and it is easy to see why basic monitoring simply isn’t enough. This growing complexity requires advanced monitoring capabilities that every IT organization should invest in.


Application-aware network performance monitoring taps into the data provided by deep packet inspection and analysis to show how network performance affects the applications and services that depend on it.


With proactive capacity forecasting, alerting, and reporting, IT pros can easily plan for future needs, making sure that forecasting is based on dynamic baselines and actual usage instead of guesses.
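As a rough illustration of trend-based forecasting (a simple sketch, not any particular product's method), a least-squares slope over daily utilization samples yields a days-until-threshold estimate:

```python
# Fit a linear trend to daily utilization samples (0.0-1.0) and estimate
# how many days remain until a capacity threshold is crossed.
def days_until(utilization, threshold=0.8):
    """Least-squares slope over daily samples; None if usage is flat/falling."""
    n = len(utilization)
    mean_x, mean_y = (n - 1) / 2, sum(utilization) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(utilization))
    den = sum((x - mean_x) ** 2 for x in range(n))
    slope = num / den
    if slope <= 0:
        return None                     # no growth trend to extrapolate
    return max(0.0, (threshold - utilization[-1]) / slope)
```

Basing the estimate on actual samples rather than guesses is exactly the "dynamic baseline" point made above.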


Intelligent topology-aware alerts with downstream alert suppression will dramatically reduce the noise and accelerate troubleshooting.
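The suppression logic itself is just a graph walk. A sketch, assuming the topology is modeled as a parent-to-children map (an assumption about how a given tool represents it):

```python
# Downstream alert suppression: when a node fails, everything reachable
# only through it will also appear down, so those alerts can be muted.
def downstream(topology, failed):
    """All nodes below the failed node in a parent -> children map."""
    suppressed, stack = set(), [failed]
    while stack:
        node = stack.pop()
        for child in topology.get(node, []):
            if child not in suppressed:
                suppressed.add(child)
                stack.append(child)
    return suppressed
```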


Dynamic real-time maps provide a visual representation of a network with performance metrics and link utilization. And with the prevalence of wireless networks, adding wireless network heat maps is an absolute must to understand wireless coverage and ensure that employees can reach critical information wherever they are.


Current and detailed information about the network’s availability and performance should be a top priority for IT pros across the government. However, federal IT pros and the networks that they manage are responsible for delivering services and data that ensure that critical missions around the world are successful and that services are available to all citizens whenever they need them. This is no small task. Each network monitoring technique I discussed provides a wealth of data that federal IT pros can use to detect, diagnose, and resolve network performance problems and outages before they impact missions and services that are vital to the country.


Find the full article on our partner DLT’s blog, TechnicallySpeaking.

Data center consolidations have been a priority for years, with the objectives of combatting server sprawl, centralizing and standardizing storage, streamlining application management, and establishing shared services across multiple agencies.


But, consolidation has created challenges for federal IT professionals, including:

  • Managing the consolidation without an increase in IT staff
  • Adapting to new best practices like shared services and cloud computing
  • Shifting focus to optimizing IT through more efficient computing platforms


Whether agencies have finished their consolidation or not, federal IT pros have definitely felt the impact of the change. But how do the remaining administrators manage the growing infrastructure and issues while meeting SLAs?


One way data center administrators can stay on top of all the change is to modernize their monitoring system, with the objective of improved visibility and faster troubleshooting.


The Value of Implementing Holistic Monitoring


A holistic approach to monitoring provides visibility into how each individual component is running and impacting the environment as a whole. It can bridge the gap that exists between the IT team and the program groups through connected visibility.



Who is responsible for what? Shared services can be hard to navigate.


Even though the data center team now owns the infrastructure and application operations, the application owners still need to ensure application performance. Both teams require visibility into performance with a single point of truth, which streamlines communication and eases the transition to shared services.


Application Performance

Application performance is critical to executing agency missions, so when users provide feedback that an application is slow, it is up to data center administrators to find the problem and fix it—or escalate it—quickly.


Individually checking each component of the IT infrastructure—the application, servers, storage, database or a virtualized environment—can be tedious, time consuming and difficult. End-to-end visibility into how each component is performing, allows for quick identification and remediation of the issues.



Virtualization can introduce complexities and management challenges. In a virtual environment, virtual machines can be cloned and moved around so easily and often that the impact on the entire environment can be missed, especially in a dynamically changing infrastructure.


Consolidated monitoring and comprehensive awareness of the end-to-end virtual environment is the answer to effective change management in the virtualized environment.



Efficiency was a key driver behind consolidations, but it can seem nearly impossible for the remaining data centers to achieve. With integrated monitoring that provides end-to-end visibility, however, data center administrators can troubleshoot issues in seconds instead of hours or days and proactively manage their IT. With the right tools, administrators can provide end-users with high service levels.


Consolidation is part of the new reality for data center administrators. Holistic, integrated monitoring and management of the dynamically changing IT environment will help to refine the new responsibilities of being a shared service, ensure mission-critical applications are optimized and improve visibility into virtualized environments.


Find the full article on Signal.

Today’s users demand access to easy-to-use applications even though the IT landscape has become a complex mishmash of end-user devices, connectivity methods, and siloed IT organizations, some of which contain further siloes for applications, databases and back-end storage.


These multiple tiers of complexity, combined with end users’ increasing dependency on accessible applications, creates significant difficulties for IT professionals across the globe, but especially in government agencies, with all their regulations and policies.


Figuring out how to maintain application performance in these complex environments has become a key objective for federal IT staff. Here are five methods for preserving a high-performance app stack:


1. Simplifying application stack management


A significant part of the effort lies in simplifying management of the application stack (app stack) itself, which includes the application, middleware and the extended infrastructure the application requires for performance. Think about the entire environment.


Rather than looking at networks, storage, servers and clients as distinct silos of individual responsibility, federal IT departments can reduce the complexity of the sometimes conflicting information they use to manage these silos. The simplification lies in the practice of monitoring all applications and the resources they use as a single application ecosystem, recognizing the relationships.


Working through the entire app stack lets federal IT pros understand where performance is degraded and improves troubleshooting.


2. Monitoring servers


Server monitoring is a significant part of managing the app stack. Servers are the engines that provide application services to the end user. And applications need sufficient CPU cycles, memory, storage I/O and network bandwidth to work effectively.


Monitoring current server conditions and analyzing historical usage trends is the key to ensuring problems are resolved rapidly or prevented.
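A bare-bones example of checking current server conditions with nothing but the standard library. The thresholds are arbitrary placeholders; a real monitor would compare against historical trends as described above:

```python
# Spot-check server conditions: 1-minute load average and disk usage.
# Note: os.getloadavg is only available on Unix-like systems.
import os
import shutil

def server_health(path="/", load_limit=8.0, disk_limit=0.9):
    """Return current conditions and whether each is within its limit."""
    load1, _, _ = os.getloadavg()
    usage = shutil.disk_usage(path)
    disk_frac = usage.used / usage.total
    return {
        "load1": load1,
        "disk_fraction": disk_frac,
        "ok": load1 < load_limit and disk_frac < disk_limit,
    }
```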


3. Monitoring virtualization


Monitoring the virtualization infrastructure is key. Federal IT pros should monitor how and when VMs move from one host or cluster to another, as well as the status of shared hosts, networks, and storage resources, especially if they are oversubscribed.


Federal IT pros should prioritize how individual VMs on a host are working together, whether resource contention is occurring on a host or a cluster, and what applications are causing those conflicts. In addition, federal IT pros should keep tabs on network latency.


4. Monitoring user devices


Today’s users are running applications on all types of devices with a range of capabilities and connectivity options, all of which are significant factors in maintaining a healthy app ecosystem.


5. Bring it together with alerting


The last component is alerting, which notifies technicians when there is an issue with a component of the app stack prior to the first end-user noticing the problem.


The ability to set proactive performance baselines for devices and applications to signal when app stack issues arise helps both in day-to-day monitoring and future capacity planning.
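A proactive performance baseline of this kind can be as simple as a standard-deviation band around historical values. An illustrative sketch (real tools use more sophisticated, seasonality-aware baselines):

```python
# Flag a sample that strays more than k standard deviations from the
# history used to build the baseline.
from statistics import mean, stdev

def is_anomaly(history, sample, k=3.0):
    """True if sample falls outside mean +/- k*stdev of the history."""
    mu, sigma = mean(history), stdev(history)
    return abs(sample - mu) > k * sigma
```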


In short, it’s critical for federal IT pros to be aware of, monitor and set up notifications across the app stack – from back end storage, through application services and processes to front-end users – and provide high performance from a holistic perspective.


Find the full article on Government Computer News.

Regardless of which new technologies federal network administrators adopt, they will always need dependable, consistent, and highly available solutions that keep their networks running -- constantly.


Sadly, that’s not always the reality.


Last year's survey of federal IT professionals by my company, SolarWinds, indicated that performance and availability issues continue to plague federal IT managers. More than 90 percent of survey respondents claimed that end-users in their organizations were negatively impacted by a performance or availability issue with business-critical technology over the past year, and nearly 30 percent of respondents claimed these issues occurred at least six times.


What can IT pros do about this?




Don’t worry about deploying everything in one fell swoop. Instead, take a piecemeal approach. Focus on a single implementation and make sure that particular piece of technology absolutely shines. The trick to this strategy is keeping the big picture in mind as the individual pieces of technology are deployed.




Network monitoring is a must. To do it properly, start with a baseline diagnostic that assesses the overall network performance, including availability and average response times. Once this baseline is established, look for anomalies, including configuration changes that other users may have made to the network. Find the changes, identify who made them, and factor their impact into the performance data as you identify problems and keep the network running.
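Spotting those configuration changes amounts to comparing each device's running config against its baseline. A stdlib sketch of that step (it finds what changed; identifying who made the change would come from your devices' AAA or syslog records):

```python
# Hash each device's config against the baseline and surface a diff for
# anything that changed. Device names and config text are illustrative.
import difflib
import hashlib

def changed_devices(baseline, current):
    """Map device -> unified diff for every config that differs."""
    changes = {}
    for device, config in current.items():
        old = baseline.get(device, "")
        if hashlib.sha256(config.encode()).hexdigest() != \
           hashlib.sha256(old.encode()).hexdigest():
            diff = difflib.unified_diff(
                old.splitlines(), config.splitlines(), lineterm="")
            changes[device] = "\n".join(diff)
    return changes
```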




Make no mistake: errors will happen, and it’s important to have a plan in place when things go south. That plan should be comprised of three facets: technology, people, and process.


First, a well-defined technology plan outlines how to best handle the different components of the network infrastructure, including monitoring and building in redundancies. That means having a backup for equipment that’s core to an agency’s network traffic.


Second, make sure the IT staff includes several people who share the same skillset and expertise. What happens if a key resource is out sick or leaves the organization? All of that expertise is gone, leaving a very big knowledge gap that will be hard to fill.


Third, develop a process that allows for rollbacks to prior configurations. That’s an important failsafe in case of a serious network error.
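Conceptually, that rollback failsafe is version history for configurations. A hypothetical in-memory sketch; real tools persist snapshots and push the restored config back to the device:

```python
# Keep timestamped/tagged config snapshots per device and restore the
# most recent known-good one on demand. Purely illustrative.
class ConfigHistory:
    def __init__(self):
        self._snapshots = {}          # device -> list of (tag, config)

    def snapshot(self, device, tag, config):
        self._snapshots.setdefault(device, []).append((tag, config))

    def rollback(self, device):
        """Drop the latest config and return the one before it."""
        history = self._snapshots.get(device, [])
        if len(history) < 2:
            raise ValueError(f"no prior configuration for {device}")
        history.pop()                 # discard the bad change
        return history[-1][1]
```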




IT professionals need to understand organizational objectives to accomplish their own goals, which include optimizing and securing a consistently dependable network. Doing that is not just about technology. It also requires the ability to communicate freely with colleagues and agency leadership so that everyone is working toward the same goals.


CIOs must build a culture that is barrier-free and allows for regular interaction with other business leaders outside the technology realm. After all, isn’t that network or database that the IT staff manages directly tied to agency performance?


Having everything run perfectly all the time is an impossible dream. However, six nines of uptime is certainly achievable. All it takes is a little bit of simplification and planning, and a whole lot of technology and teamwork.


Find the full article on GCN.


Interested in this year’s cyber security survey? Go here.

As government agencies shift their focus to virtualization, automation and orchestration, cloud computing, and IT-as-a-Service, those who were once comfortable in their position as jacks-of-all-IT-trades are being forced to choose a new career path to remain relevant.


Today, there’s very little room for “IT generalists.” A generalist is a manager who possesses limited knowledge across many domains. They may know how to tackle basic network and server issues, but may not understand how to design and deploy virtualization, cloud, or similar solutions that are becoming increasingly important for federal agencies.


But IT generalists can grow their careers and stay relevant. That hope lies in choosing between two different career paths: that of the “IT versatilist” or “IT specialist.”


The IT Versatilist


An IT versatilist is someone who is fluent in multiple IT domains. Versatilists have broadened their knowledge base to include a deep understanding of several of today’s most buzzed-about technologies. Versatilists can provide their agencies with the expertise needed to architect and deliver a virtualized network, cloud-based services, and more.


Versatilists also have the opportunity to help their agencies move forward by mapping out a future course based on their familiarity with deploying innovative and flexible solutions. This strategic support enhances their value in the eyes of senior managers.


The IT Specialist


Like versatilists, IT specialists have become increasingly valuable to agencies looking for expertise in cutting edge technologies. However, specialists focus on a single IT discipline, such as a specific application. For example, a specialist might have a very deep grasp of security or storage, but not necessarily expertise in other adjacent areas.


Still, specialists have become highly sought-after in their own right. A person who’s fluent in an extremely important area, like network security, will find themselves in-demand by agencies starved for security experts. This type of focus can nicely complement the well-rounded aspect that versatilists bring to the table.


Where does that leave the IT generalist?


Put simply – on the endangered list.


The government is making a major push toward greater network automation. Yes, this helps take some items off the plates of IT administrators – but it also minimizes the government’s reliance on human intervention. Those who have traditionally been “keeping the lights on” might be considered replaceable commodities in this type of environment.


If you’re an IT generalist, you’ll want to expand your horizons to ensure that you have a deep knowledge and expertise of IT constructs in at least one relevant area. Relevant disciplines will most likely center on things like containers, virtualization, data analytics, OpenStack, and other new technologies.


Training on these solutions will become essential, and you may need to train yourself. Attend seminars or webinars, scour educational books and online resources, and lean on vendors to provide additional insight and background into particular products and services.


Whatever the means, generalists must become familiar with the technologies and methodologies that are driving federal IT forward. If they don’t, they risk getting left out of future plans.


Find the full article on our partner DLT’s blog, TechnicallySpeaking.

The incorrect use of personal devices or the inadvertent corruption of mission-critical data by a government employee can turn out to be more than simple accidents. These activities can escalate into threats that can result in national security concerns.


These types of accidents happen more frequently than one might expect — and they’ve got government IT professionals worried, because one of the biggest concerns continues to be threats from within.


In last year's cybersecurity survey, my company SolarWinds discovered that administrators are especially cognizant of the potential for fellow colleagues to make havoc-inducing mistakes. Yes, it’s true: government technology professionals are just as concerned about the person next to them making a mistake as they are about an external Anonymous-style group or a rogue hacker.


So, what are agencies doing to tackle internal mistakes? Primarily, they’re bolstering federal security policies with their own security policies for end users. This involves gathering intelligence and providing information and training to employees about possible entry points for attacks.


While this is a good initial approach, it’s not nearly enough.


The issue is the sheer volume of devices and data that are creating the mistakes in the first place. Unauthorized and unsecure devices could be compromising the network at any given time, without users even realizing it. Phishing attacks, accidental deletion or modification of critical data, and more have all become much more likely to occur.


Any monitoring of potential security issues should include the use of technology that allows IT administrators to pinpoint threats as they arise, so they may be addressed immediately and without damage.


Thankfully, there are a variety of best practices and tools that address these concerns and nicely complement the policies and training already in place, including:


  • Monitoring connections and devices on the network and maintaining logs to track user activity.
  • Identifying what is or was on the network by monitoring network performance for anomalies, tracking devices, offering network configuration and change management, managing IT assets, and monitoring IP addresses.
  • Implementing tools identified as critical to preventing accidental insider threats, such as those for identity and access management, internal threat detection and intelligence, intrusion detection and prevention, SIEM or log management, and Network Admission Control.
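To make the first practice above concrete, here is a minimal sketch of flagging accounts with an unusual number of failed logins. The log format (a list of user/event tuples) and the failure threshold are hypothetical stand-ins for whatever a real log-management tool would provide:

```python
from collections import Counter

def flag_suspicious_users(events, max_failures=3):
    """Return users whose failed-login count exceeds a threshold."""
    failures = Counter(user for user, event in events if event == "login_failed")
    return sorted(user for user, count in failures.items() if count > max_failures)

# Hypothetical records parsed from a user-activity log.
events = [
    ("alice", "login_ok"),
    ("bob", "login_failed"), ("bob", "login_failed"),
    ("bob", "login_failed"), ("bob", "login_failed"),
    ("carol", "login_failed"),
]
print(flag_suspicious_users(events))  # ['bob']
```

A real internal-threat detection tool correlates far more signals than failed logins, but the shape is the same: aggregate per-user activity, compare against a norm, surface the outliers.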


Our survey respondents called out each of these tools as useful in preventing insider threats. Together and separately, they can assist in isolating and targeting network anomalies. They can help IT professionals correlate a problem directly to a particular user. The software, combined with the policies and training, can help administrators attack an issue before it goes from simple mistake to “Houston, we have a problem.”


The fact is, data that’s accidentally lost can easily become data that’s intentionally stolen. As such, you can’t afford to ignore accidental threats, because even the smallest error can turn into a very large problem.


Find the full article on Defense Systems.


Interested in this year’s cybersecurity survey? Go here.

All too often, federal IT personnel misconstrue software as being able to make their agency compliant with various regulations. It can’t – at least not by itself.


Certainly, software can help you achieve compliance, but it should only be viewed as a component of your efforts. True and complete compliance involves defining, implementing, monitoring, and auditing processes so that they adhere to the parameters that have been set forth within the regulations. First and foremost, compliance requires strategic planning, which depends on people and management skills. Software complements this by being a means to an end.


To illustrate, let’s examine some regulatory examples:


  • Federal Information Security Management Act (FISMA): FISMA’s requirements call for agencies to deploy multifaceted security approaches to ensure information is kept safe from unauthorized access, use, disclosure, disruption, modification, and destruction. Daily oversight can be supported by software that allows teams to be quickly alerted to potentially dangerous errors and events.


  • Federal Risk and Authorization Management Program (FedRAMP): FedRAMP may be primarily focused on cloud service providers, but agencies have a role to ensure their providers are FedRAMP compliant, and to continually “assess, authorize and continuously monitor security controls that are the responsibility of the agency.” As such, FedRAMP calls for a combination of hands-on processes and technology.


  • Health Insurance Portability and Accountability Act (HIPAA): The response to HIPAA has typically centered on the use of electronic health records, but the Act requires blanket coverage that goes well beyond technology use. As such, healthcare workers need to be conscious of how patient information is shared and displayed.


  • Defense Information Systems Agency Security Technical Implementation Guides (STIGs): The STIGs provide guidelines for locking down potentially vulnerable information systems and software. They are updated as new threats arise. It’s up to federal IT managers to closely follow the STIGs to ensure the software they’re using is not only secure, but working to protect their systems.


Particular types of software can significantly augment the people and processes that support your compliance efforts, so take a closer look at the following tools:


  • Event and Information Management tracks events as they occur on your network and automatically alerts you to suspicious or problematic activity. This type of software uses intelligent analysis to identify events that are inconsistent with predetermined compliant behaviors, and is intelligent enough to issue alerts before violations occur.


  • Configuration Management allows for the configuration and standardization of routers, firewalls, and switches to ensure compliance. This type of software can also be useful in identifying potential issues that might adversely affect compliance before they come to pass.


  • Patch Management is critical for closing known vulnerabilities before they can be exploited. It can be very handy in helping your organization maintain compliance with regard to security and ensuring that all operating systems and applications are updated.
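As a rough illustration of how configuration management software catches issues before they affect compliance, drift detection can be as little as diffing each device's running settings against an approved baseline. The device settings below are hypothetical examples, not a real device schema:

```python
def find_drift(baseline, running):
    """Return settings that differ from the approved baseline,
    mapped to (approved, actual) pairs."""
    drift = {}
    for key, approved in baseline.items():
        actual = running.get(key)
        if actual != approved:
            drift[key] = (approved, actual)
    return drift

# Hypothetical approved baseline vs. a device's running configuration.
baseline = {"ssh_version": "2", "telnet": "disabled", "snmp_community": "private"}
running  = {"ssh_version": "2", "telnet": "enabled",  "snmp_community": "private"}
print(find_drift(baseline, running))  # {'telnet': ('disabled', 'enabled')}
```

In practice the baseline would come from an approved, version-controlled configuration, and the check would run on a schedule so drift is flagged before an auditor, or an attacker, finds it.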


Each of the aforementioned types of software can form a collective safety net for FISMA compliance and serve as a critical component of a security plan, but they can’t be the only component if you’re to achieve your compliance goals. As the old saying goes, the rest is up to you.


Find the full article on our partner DLT’s blog, TechnicallySpeaking.

When a mission-critical application experiences an outage or severe performance degradation, the pressure on the agency and its information technology (IT) contractors to find and fix the problem quickly can be immense. Narrowing down the root cause of the problem wherever it exists within the application stack (appstack) and enabling the appropriate IT specialists to quickly address the underlying problem is essential.


This post outlines how taking a holistic view of the appstack and optimizing visibility into the entire IT environment—applications, storage, virtual machines, databases and more—is the key to maintaining healthy applications, and how the right monitoring tools can quickly identify and tackle the problems before they become serious performance and security threats.


Know the impact


Your IT infrastructure is made up of a complex web of servers, networks, and databases, among other things, and troubleshooting can be tricky. But an end user with a problem only knows that he or she can’t accomplish his or her task.


Take a holistic view of monitoring


Using individual monitoring tools for appstack issues is inefficient. Wouldn’t it be more effective if you already had a narrowed-down area in which to look for a problem?


Holistic monitoring prevents the “where’s the problem?” issue by keeping an eye on the entire appstack, pulling information from each individual monitoring tool for a high-level view of your systems’ health. High-level monitoring tools can be checked quickly and efficiently without diving into individual tools, and they tie together data from in-depth tools to reach conclusions and identify problems across multiple areas.
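The roll-up a holistic tool performs can be sketched simply, under the assumption that each underlying tool reports a per-component status (the report structure here is illustrative):

```python
def overall_health(tool_reports):
    """Roll individual tool statuses into one top-level status.

    Any 'critical' report dominates; otherwise a 'warning' does; else 'ok'.
    """
    statuses = [status for report in tool_reports.values() for status in report.values()]
    if "critical" in statuses:
        return "critical"
    if "warning" in statuses:
        return "warning"
    return "ok"

# Hypothetical per-tool reports feeding the high-level view.
reports = {
    "network": {"router-1": "ok", "switch-4": "warning"},
    "storage": {"san-2": "ok"},
    "apps":    {"db-frontend": "ok"},
}
print(overall_health(reports))  # 'warning'
```

The value is the single answer to "is anything wrong, and roughly where?" — the deep-dive tools then take over for diagnosis.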


With a tool that provides broad, high-level visibility of the status of all layers of the appstack, IT professionals can quickly get an interdisciplinary look at different aspects of the infrastructure and how configurations or performance of various components have recently changed.


Extend the view to security


Another advantage of holistic application monitoring is that it gives visibility into both performance and security. A good holistic monitoring tool talks to all your different firewalls, intrusion detection systems and security-focused monitoring tools. It collects log data and correlates and analyzes it to give you visibility into performance and security issues as they’re happening.


Prioritize monitoring for maximum security


1. Understand what you’re trying to secure. The starting point in every system prioritization is to choose an end goal. What aspect of your appstack is the most important to secure?


Come up with a prioritized list and find out how each priority area is being secured, the technologies being used and the existing monitoring.


2. Use best practices for monitoring policies. Tools need to be checked regularly. Monitoring tools are only as good as their results, and performance and security issues could be slipping past you.


Be sure to set up alerts in each monitoring tool as well as running a holistic monitoring tool. This ensures that your IT pros are immediately made aware of issues with individual components of your appstack in addition to the overall insights offered by the holistic tool.


3. Don’t sacrifice your deep-dive specialized tools. Keep in mind that holistic monitoring doesn’t eliminate the need for your existing, individual monitoring tools. It’s good to have a holistic tool with overall visibility, but you’ll also need the more in-depth tools for deeper dives when identifying problem areas.


Find the full article on Signal.

As federal technology environments become more complex, the processes and practices used to monitor those environments must evolve to stay ahead of -- and mitigate -- potential risks and challenges.


Network monitoring is one of the core IT management processes that demands focus and attention in order to be effective. In fact, there are five characteristics of advanced network monitoring that signal a forward-looking, sophisticated solution:


  1. Dependency-aware network monitoring
  2. Intelligent alerting systems
  3. Capacity forecasting
  4. Dynamic network mapping
  5. Application-aware network performance


It might be time to start thinking about evolving your monitoring solution to keep up.


1. Dependency-aware network monitoring


Network monitoring is a relatively basic function, sending status pings from devices on your agency’s network so you know they’re operational. Some solutions offer a little bit more with the ability to see connectivity -- which devices are connected to each other.

A sophisticated network monitoring system, however, provides all dependency information: what’s connected, network topology, device dependencies and routing protocols. This type of solution then takes that dependency information and builds a theoretical picture of the health of your agency’s network to help you effectively prioritize network alerts.
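The way dependency information helps prioritize alerts can be sketched as root-cause isolation: if a device's upstream parent is also down, its alert is suppressed so only the root cause is escalated. The topology below is hypothetical:

```python
def suppress_dependent_alerts(down_devices, parents):
    """Keep only root-cause alerts: drop a device's alert if any
    upstream device it depends on is also down."""
    down = set(down_devices)
    root_causes = []
    for device in down_devices:
        node, masked = device, False
        while node in parents:
            node = parents[node]
            if node in down:
                masked = True
                break
        if not masked:
            root_causes.append(device)
    return root_causes

# Hypothetical topology: servers behind a switch, switch behind a router.
parents = {"server-a": "switch-1", "server-b": "switch-1", "switch-1": "router-1"}
print(suppress_dependent_alerts(["server-a", "server-b", "switch-1"], parents))
# Only switch-1 is escalated; the servers are unreachable because of it.
```

Without the dependency map, all three outages would page someone; with it, one actionable alert points at the actual failure.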


2. Intelligent alerting system


The key to implementing an advanced network monitoring solution is having an intelligent alerting system that triggers alerts based on dynamic baselines calculated from historical data. An alerting system that understands the dependencies among devices can significantly reduce the number of alerts being escalated.

Intelligent alerting will also allow an organization to “tune” alerts so that admins get only one ticket when there is a storm of similar events, or so that alerts are sent only after a condition has persisted for a significant period of time.
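A minimal sketch of such alerting, assuming historical samples of a single metric (say, percent CPU), combines a dynamic baseline (mean plus a few standard deviations) with a persistence requirement so that a single spike doesn't page anyone:

```python
from statistics import mean, stdev

def should_alert(history, recent, sigma=3, persistence=3):
    """Alert only when the last `persistence` samples all exceed a
    dynamic baseline of mean + sigma * stdev of the historical data."""
    threshold = mean(history) + sigma * stdev(history)
    window = recent[-persistence:]
    return len(window) == persistence and all(x > threshold for x in window)

history = [40, 42, 38, 41, 39, 40, 43, 41]   # past % CPU samples
print(should_alert(history, [95]))            # one spike: no alert
print(should_alert(history, [95, 96, 97]))    # sustained breach: alert
```

A production system would recompute the baseline on a rolling window and per time-of-day, but the principle — threshold from history, alert on persistence — is the same.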


3. Capacity forecasting


An agency-wide view of utilization for key metrics, including bandwidth, disk space, CPU and RAM, plays two very important roles in capacity forecasting:


1.    When you have a baseline, you can see how far above or below normal the network is functioning; you can see trends over time and can be prepared for changes on your network.

2.    Because procurement can be a lengthy process, having the ability to forecast capacity requirements months in advance allows you to have a solution in place when the capacity is needed.
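Forecasting itself can be as simple as fitting a trend line to daily utilization samples. This sketch, assuming roughly linear growth, estimates the days remaining before a disk fills — exactly the lead time a lengthy procurement process needs:

```python
def days_until_full(usage_pct, capacity_pct=100.0):
    """Fit a least-squares line to daily utilization samples and
    estimate how many days remain until the trend hits capacity."""
    n = len(usage_pct)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(usage_pct) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, usage_pct)) / \
            sum((x - x_mean) ** 2 for x in xs)
    if slope <= 0:
        return None  # flat or shrinking usage: no exhaustion in sight
    intercept = y_mean - slope * x_mean
    return (capacity_pct - intercept) / slope - (n - 1)

# Disk usage growing ~1% per day: roughly 30 days of headroom left.
samples = [60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70]
print(round(days_until_full(samples)))  # 30
```

Real capacity planners use more robust models, but even a straight-line projection turns "the disk is filling up" into a date you can plan a purchase around.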


4. Dynamic network mapping


Dynamic network mapping allows you to take dependency information one step further and display it on a single screen, with interactive, dynamic maps that can display link utilization, device performance metrics, automated geolocation and wireless heat maps.


5. Application-aware network performance


Users often blame the application, but is it really the application? Application-aware network performance monitoring collects information on individual applications as well as network data and correlates the two to determine what is causing an issue. You’ll be able to see if it is the application itself causing the issue or if there is a problem on the network.


As I mentioned, federal technology environments are getting more complex; at the same time, budgets remain tight. Evolving your network monitoring solution will help with both of these challenges -- it will keep you ahead of the technology curve and help meet budget and forecasting challenges.


Find the full article on GCN.

Forget about writing a letter to your congressman – now, people are using tools like the web, email, and social media to have their voices heard on the state, local, and federal levels. Forward-looking agencies and politicians are even embracing crowdsourcing as a way to solicit feedback and innovative ideas for improving the government.


Much of this is due to the ubiquity of mobile devices. People are used to being able do just about everything with a smartphone or tablet, from collaborating with their colleagues wherever they may be, to ordering a pizza with a couple of quick swipes.


Citizens expect their interactions with the government to be just as satisfying and simple – but, unfortunately, recent data indicates that this has not been the case. According to a January 2015 report by the American Customer Satisfaction Index, citizen satisfaction with federal government services continued to decline in 2014. This, despite Cross-Agency Priority goals that state federal agencies are to “utilize technology to improve the customer experience.”


Open data initiatives can help solve these issues, but efforts to institute these initiatives are creating new and different challenges for agency IT pros.


  • First, they must design services that allow members of the electorate to easily access information and interact with their governments using any type of device.
  • Then, they must monitor these services to ensure they continue to provide users with optimal experiences.


Those who wish to avoid the wrath of the citizenry would do well to add automated end-user monitoring to their IT tool bag. End-user monitoring allows agency IT managers to continuously monitor the user experience without having to manually check to see if a website or portal is functioning properly. It can help ensure that applications and sites remain problem-free – and enhance a government’s relationship with its citizens.


There are three types of end-user monitoring solutions IT professionals can use. They work together to identify and prevent potential problems with user-facing applications and websites, though each goes about it a bit differently.


First, there is web performance monitoring, which can proactively identify slow or non-performing websites that could hamper the user experience. Automated web performance monitoring tools can also report on load-times of page elements so that administrators can adjust and fix slow-loading pages accordingly.


Synthetic end-user monitoring (SEUM) allows IT administrators to run simulated tests on different possible scenarios to anticipate the outcome of certain events. For example, in the days leading up to an election or critical vote on the hill, agency IT professionals may wish to test certain applications to ensure they can handle spikes in traffic. Depending on the results, managers can make adjustments accordingly to handle the influx.
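At its core, a synthetic check executes a scripted user journey against a response-time budget and reports pass or fail. This sketch uses a stand-in callable in place of a real scripted browser session, and the two-second budget is an arbitrary example:

```python
import time

def synthetic_check(transaction, timeout_s=2.0):
    """Run a scripted transaction (a callable simulating a user
    journey) and report pass/fail against a response-time budget."""
    start = time.monotonic()
    try:
        transaction()
        elapsed = time.monotonic() - start
        return {"ok": elapsed <= timeout_s, "seconds": round(elapsed, 3)}
    except Exception as exc:
        return {"ok": False, "error": str(exc)}

# Stand-in for a real scripted login-and-search journey.
def fake_user_journey():
    time.sleep(0.05)

print(synthetic_check(fake_user_journey)["ok"])  # True: well within budget
```

Scheduled around the clock, checks like this surface failures before any real citizen hits them — which is precisely what distinguishes synthetic monitoring from the passive, real-user kind.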


Finally, real-time end-user monitoring effectively complements its synthetic partner. It is a passive monitoring process that—unlike SEUM, which uses simulated data—gathers actual performance data as end users visit and interact with the web application in real time.


Today, governments are becoming increasingly like businesses. They’re trying to become more agile and responsive, and are committed to innovation. They’re also looking for ways to better service their customers. The potent combination of synthetic, real-time, and web performance monitoring can help them achieve all of these goals by greatly enhancing end-user satisfaction and overall citizen engagement.


Find the full article on GCN.
