
Maintaining a Secure Environment: Monitoring Beyond the Log File


This blog series has been all about taking a big step back and reviewing your ecosystem. What do you need to achieve? What are the organization’s goals and mandates? What assets are in play? Are best practices and industry recommendations in place? Are you making the best use of existing infrastructure? The more questions asked and answered, the more likely you are to build something securable without ignoring business needs or compromising usability. Along the way, you also created a baseline that defines a normal working environment.

There’s no such thing as a 100% protected network. Threats evolve daily. If you can’t block every attack, the next best thing is detecting when something abnormal is occurring. Anomaly detection requires the deployment of methodologies beyond the capabilities of the event logs generated by the elements on the network. Collecting information about network events has long been essential to providing a record of activities related to accounting, billing, compliance, SLAs, forensics, and other requirements. Vendors have provided data in standardized forms such as Syslog, as well as device-specific formats. These outputs are then analyzed to provide a starting point for business-related planning, security breach identification and remediation, and many other outcomes.
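Before any of this analysis can happen, raw event lines have to be split into usable fields. As a minimal sketch, the snippet below parses a classic BSD-style (RFC 3164) syslog line into priority, facility, severity, host, and message; the sample firewall message and field names are illustrative, not tied to any particular vendor's format.

```python
import re

# Hypothetical RFC 3164-style line: "<PRI>Mmm dd hh:mm:ss host message"
SYSLOG_RE = re.compile(
    r"^<(?P<pri>\d{1,3})>"                                  # priority = facility*8 + severity
    r"(?P<timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "    # e.g. "Oct 11 22:14:15"
    r"(?P<host>\S+) "
    r"(?P<message>.*)$"
)

def parse_syslog(line):
    """Split one syslog line into facility, severity, timestamp, host, message."""
    m = SYSLOG_RE.match(line)
    if not m:
        return None
    pri = int(m.group("pri"))
    return {
        "facility": pri // 8,   # e.g. 4 = security/auth
        "severity": pri % 8,    # 0 (emergency) .. 7 (debug)
        "timestamp": m.group("timestamp"),
        "host": m.group("host"),
        "message": m.group("message"),
    }

record = parse_syslog("<34>Oct 11 22:14:15 fw01 session denied from outside")
```

Real collectors also have to handle RFC 5424 lines, vendor quirks, and malformed input, but the decode-then-normalize step looks broadly like this.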

In this blog, I’ll review different analysis methods you can use to detect threats and performance issues based on the collection of event log data from any or all systems in the ecosystem.

Passive/deterministic traffic analysis: Using rule- and signature-based detection, passive traffic analysis continually monitors traffic for protocol anomalies, known threats, and known behaviors. Examples include detecting tunneled protocols (such as IRC commands carried within ICMP traffic), flagging the use of non-standard ports and protocol field values, and inspecting application-layer traffic for unique application attributes and behaviors that identify operating system platforms and their potential vulnerabilities.
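A rule/signature engine of this kind reduces to matching parsed packet metadata against a list of conditions. The sketch below is a toy illustration of that idea; the rule fields, packet dictionary shape, and the two example signatures (IRC-over-ICMP tunneling, HTTP on a non-standard port) are all made up for demonstration, not a real IDS rule format.

```python
# Illustrative signature rules: each rule names a protocol plus conditions
# that must all hold for the rule to fire.
RULES = [
    {"name": "IRC-over-ICMP tunnel", "proto": "icmp",
     "payload_contains": b"PRIVMSG"},
    {"name": "HTTP on non-standard port", "proto": "tcp",
     "dst_port_not_in": {80, 443, 8080}, "payload_contains": b"HTTP/1.1"},
]

def match_rules(pkt):
    """Return the names of all rules this packet triggers."""
    hits = []
    for rule in RULES:
        if rule["proto"] != pkt["proto"]:
            continue                                   # wrong protocol
        if "dst_port_not_in" in rule and pkt.get("dst_port") in rule["dst_port_not_in"]:
            continue                                   # port is on the allowed list
        if rule["payload_contains"] not in pkt["payload"]:
            continue                                   # signature bytes absent
        hits.append(rule["name"])
    return hits

# ICMP payload carrying an IRC command: a classic tunneling signature.
alerts = match_rules({"proto": "icmp", "payload": b"PRIVMSG #chan :beacon"})
```

Production engines (Snort, Suricata) compile thousands of such rules into optimized matchers, but the evaluation model is the same: every packet is tested against conditions on protocol, ports, and payload content.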

Correlating threat information from intrusion prevention systems and firewalls with actual user identities from identity management systems allows security professionals to identify breaches of policy and fraudulent activity more accurately within the internal network.
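That correlation is essentially a join between alert records and an IP-to-identity mapping. The following sketch shows the idea with made-up data; the field names and the static `ip_to_user` table stand in for what an identity management system would export (in practice the mapping is time-dependent, since DHCP leases and VPN sessions reassign addresses).

```python
# Hypothetical firewall/IPS alerts, keyed by source IP.
alerts = [
    {"src_ip": "10.1.1.5", "rule": "blocked-exfil", "time": "09:12"},
    {"src_ip": "10.1.1.9", "rule": "port-scan",     "time": "09:15"},
]

# Hypothetical snapshot from an identity management system.
ip_to_user = {"10.1.1.5": "jdoe", "10.1.1.9": "svc-backup"}

def correlate(alerts, ip_to_user):
    """Attach a user identity to each alert whose source IP is known."""
    return [dict(alert, user=ip_to_user.get(alert["src_ip"], "unknown"))
            for alert in alerts]

enriched = correlate(alerts, ip_to_user)
```

Once alerts carry identities rather than bare addresses, policy breaches can be attributed to accounts, which is what makes internal fraud detection tractable.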

Traffic flow patterns and behavioral analysis: Capture and analysis of traffic using flow data. Although some formats of flow data are specific to one vendor or another, most include traffic attributes with information about what systems are communicating, where the communications are coming from and headed to, and in what direction the traffic is moving. Although full-packet inspection devices are a critical part of the security infrastructure, they’re not designed to monitor all traffic between all hosts communicating within the network interior. Behavior-based analysis, as provided by flow analysis systems, is particularly useful for detecting traffic patterns associated with malware.
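One malware-associated pattern that flow data exposes cheaply is fan-out: a single internal host suddenly contacting many peers on the same service port, as in lateral movement over SMB. The sketch below aggregates simplified, NetFlow-like records (the tuples and addresses are invented) to count distinct destinations per source.

```python
from collections import defaultdict

# Simplified flow records: (src, dst, proto, dst_port, bytes).
# Loosely modeled on NetFlow/IPFIX attributes; values are illustrative.
flows = [
    ("10.0.0.7", "10.0.0.20", "tcp", 445, 1200),
    ("10.0.0.7", "10.0.0.21", "tcp", 445,  900),
    ("10.0.0.7", "10.0.0.22", "tcp", 445, 1100),
    ("10.0.0.8", "10.0.0.20", "tcp", 443, 5000),
]

def fanout_by_host(flows, port):
    """Count distinct destinations each source contacts on a given port.
    High fan-out on SMB (445) is a common lateral-movement indicator."""
    dests = defaultdict(set)
    for src, dst, _proto, dst_port, _nbytes in flows:
        if dst_port == port:
            dests[src].add(dst)
    return {src: len(d) for src, d in dests.items()}

fanout = fanout_by_host(flows, 445)
```

Note that no payload is needed: the flow metadata alone (who talked to whom, on which port) carries the signal, which is exactly why flow analysis scales to the network interior where full-packet capture does not.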

Flow analysis is also useful for specialized devices like multifunction printers, point-of-sale (POS) terminals, automated teller machines (ATMs), and other Internet of Things (IoT) devices. These systems rarely accommodate endpoint security agents, so techniques are needed to compare actions to predictable patterns of communication. Encrypted communications are yet another application for flow and behavioral analysis. Increasingly, command-and-control traffic between a malicious server and a compromised endpoint is encrypted to avoid detection. Behavioral analysis can be used for detecting threats based on the characteristics of communications, not the contents. For example, an internal host is baselined as usually communicating only with internal servers, but it suddenly begins communicating with an external server and transferring large amounts of data.
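The closing example above can be expressed directly in code: compare each observed communication against a learned baseline of normal peers, and alert when a host talks to a new peer while moving an unusual volume of data. Everything here is illustrative, including the baseline table and the byte threshold.

```python
# Hypothetical learned baseline: for each internal host, the set of peers
# it normally communicates with (built during the baselining phase).
BASELINE_PEERS = {"10.2.0.5": {"10.2.0.10", "10.2.0.11"}}

def is_anomalous(src, dst, bytes_out, baseline, byte_threshold=50_000_000):
    """Flag traffic to a peer outside the host's baseline when the
    transferred volume also exceeds a (tunable) threshold."""
    new_peer = dst not in baseline.get(src, set())
    return new_peer and bytes_out > byte_threshold

# Internal host suddenly ships 120 MB to an unfamiliar external server.
alert = is_anomalous("10.2.0.5", "203.0.113.40", 120_000_000, BASELINE_PEERS)
```

Because the check uses only communication characteristics (peer identity and volume), it works unchanged on encrypted traffic and on agentless devices such as printers, POS terminals, and other IoT endpoints.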

Network Performance Data: This data is most often used for performance and uptime monitoring and maintenance, but it can also be leveraged for security purposes. For example, availability of Voice over IP (VoIP) networks is critical, because any interruptions may cripple telephone service in a business. CPU and system resource pressure may indicate a DoS attack.
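To use performance telemetry as a security signal, a single spike should not fire an alert; sustained resource saturation is the more meaningful indicator. A minimal sketch, with an illustrative threshold and window:

```python
def sustained_pressure(samples, threshold=90, min_consecutive=3):
    """Return True if `min_consecutive` or more CPU samples in a row
    exceed `threshold` percent -- a possible DoS indicator, as opposed
    to a transient spike."""
    run = 0
    for sample in samples:
        run = run + 1 if sample > threshold else 0
        if run >= min_consecutive:
            return True
    return False

# Three consecutive samples above 90% trip the check; one spike does not.
suspicious = sustained_pressure([40, 95, 97, 99, 96, 50])
```

The same sustained-versus-transient distinction applies to other performance counters (interface utilization, connection counts, VoIP jitter) when repurposing uptime monitoring for attack detection.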

Statistical Analysis and Machine Learning: Allows us to identify possible anomalies based on how threats are predicted to behave. This involves consuming and analyzing large volumes of data using specialized systems and applications for predictive analytics, data mining, forecasting, and optimization. For example, a statistics-based method might detect anomalous behavior, such as higher-than-normal traffic between a server and a desktop, which could indicate a suspicious data dump. A machine learning-based classifier might detect patterns of traffic previously seen with malware.
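The statistics-based example can be made concrete with one of the simplest techniques: a z-score over historical traffic volumes, flagging an observation that sits far outside the historical mean. The history values and threshold below are invented for illustration; real systems use richer models, but the principle is the same.

```python
import statistics

def zscore_alert(history, observed, threshold=3.0):
    """Flag `observed` when it lies more than `threshold` standard
    deviations from the mean of `history` (a basic anomaly test)."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return observed != mean      # flat history: any change is anomalous
    return abs(observed - mean) / stdev > threshold

# Daily MB transferred between one server and one desktop (illustrative).
history = [100, 110, 95, 105, 100, 90, 102]

# A sudden 900 MB transfer is many standard deviations out: possible data dump.
alert = zscore_alert(history, 900)
```

A machine-learning classifier replaces this single hand-picked statistic with features learned from labeled traffic, which is what lets it generalize to patterns "previously seen with malware" rather than a single volume threshold.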

Deriving correlated, high-fidelity outputs from large amounts of event data has created the need for different methods of analysis. The large number of solutions and vendors in the SIEM (security information and event management), MSSP (managed security service provider), and MDR (managed detection and response) spaces indicates how important event ingestion and correlation have become in the evolving threat landscape, as organizations seek a full view of their networks from a monitoring and attack-mitigation standpoint.

Hopefully this blog series has been a catalyst for discussions and reviews. Many of you face challenges trying to get management to understand the need for formal reviews and documentation. Presenting data on real-world breaches and their ramifications may be the best way to get attention, as is reminding decision makers of their biggest enemy: complacency.


Thanks for the article.



It's not just about the log files.  Thanks for the info.


I've noticed that as the various protection tools get more "machine learning" embedded, the more opaque they become about how they determine what is a threat.

That means it's much more common to see a proliferation of whitelisting and various "allow" methods to sidestep the fact that we can't easily see why something is being blocked.

Add the multiple vendors involved and it quickly becomes a horrible mess.

Although NOT using proper security can cause a business to fail, the cost of implementing the right security properly is still daunting.

Capturing and analyzing all data flows (North-South, East-West) in our organization required serious investment in Gigamon and Cisco ISE and AMP and Splunk, and thousands of hours of configuration & troubleshooting.  That doesn't touch the required enterprise-wide / global onboarding standard / solution and its implementation.  But those are coming, too--at extra cost.

NAC is a great idea--just like the mice agreeing that the cat needs to have a bell put on its neck for their safety.  Implementing the right fix can be very expensive.
