
Have you ever wondered what the actual cost of a security breach is?

 

Target, due to its mid-December 2013 breach, can certainly serve as a case study. As of now, Target's profits are down 46%, revenue is down 5.3%, and its stock is still down by 5% as well.

 

While this tells me it's an excellent time to shop at Target, why aren't more people shopping there? And why did Target take such a hit in profits?

 

Part of this no doubt involves the timing of the breach. Rumors started to filter out in the middle of the holiday shopping season, followed by the official announcement.

 

Another part may be related to a loss of trust. Since rumors leaked out before Target made an official announcement, consumers may not consider Target as trustworthy a source anymore. Granted, the company may have wanted to wait until after the holiday shopping season was over, but the leak made that impossible. Because the rumors came first, it looked like the company was trying to hide the damage instead.

 

Of course, after announcing the breach, Target later had to amend its assessment of the breach's magnitude. Not only were credit card numbers stolen, but personal information was stolen as well. Since privacy is a hot topic, people may continue to be wary about shopping at Target until more time has passed.

 

Here are some potential take-aways from the Target breach:

  • Hackers seem to like to try for a mid-December attack. The massive 2006 attack also happened in mid-December.
  • If someone leaks that you've been hacked, disclose immediately and with as much detail as feasible.
  • When you finally tell the public about a breach, also include what you're doing to circumvent future attacks.
  • It's probably a good idea to educate the public about what the breach means for them and their continued use of your product.

A corollary of the maxim on historical memory (those who forget mistakes are doomed to repeat them) is that the best way to go forward is to look back, if not at missed alternatives, then at least at the path taken.

 

Will 2014 be a turning point in the future of the internet? Let's review a few points of reference to put the question in context for those who someday might look back.

 

First point: As context for Facebook's recent acquisition of the WhatsApp messaging service for $19 billion, CEO Mark Zuckerberg tells us in a white paper that his plan for adding another 5 billion users to Facebook rests on the premise that connectivity for global citizens should be "a human right". "The knowledge economy is the future," says Zuckerberg. "By bringing everyone online, we’ll not only improve billions of lives, but we’ll also improve our own as we benefit from the ideas and productivity they contribute to the world."

 

Delivering remarks to the Mobile World Congress in Barcelona this month, Zuckerberg describes his plan as providing "a dial tone for the internet": guaranteed access to carrier infrastructure that enables meaningful connectivity in the knowledge economy over a phone. If you have a phone, then you'll have basic access to the internet in terms of "text-based services [messaging, social networks, search] and very simple apps like weather".

 

Second point: In a 2014 DC Court of Appeals ruling on Verizon's case against the Federal Communications Commission, Judge David S. Tatel wrote that:

Given that the commission has chosen to classify broadband providers in a manner that exempts them from treatment as common carriers, the Communications Act expressly prohibits the commission from nonetheless regulating them as such.

 

The FCC has three basic rules for guaranteeing an open internet: transparency, no blocking, and no unreasonable discrimination. These rules apply to all carriers. Since the FCC defines broadband providers as dealing in "information services," Tatel's majority opinion suspends the open internet rules for carriers that provide broadband services.

 

Of course, the FCC need only reclassify broadband providers as carriers and the open internet rules would immediately apply again. So far it has not done so. In the meantime, carriers are aggressively moving to exploit the difference between "common carrier" and "broadband provider," establishing different classes of broadband traffic with different costs. The big elephant in the room is the proposed merger of Time Warner Cable and Comcast; it would create a company that services a full third (33 million) of broadband subscribers in the US, and aggressive commercializing of the rate at which data flows to and from such a large base of consumers would be almost impossible to prevent without very clear FCC regulations.

 

Since the robber-baron maneuvers of carriers have very wide and fundamental implications for access to the knowledge economy, one would expect Zuckerberg and internet.org to oppose them. Yet in talking about the program to inclusively guarantee the human right of connectivity to current internet users and the billions of currently unwired people, Zuckerberg promotes the idea of up-selling data services to the minimally connected. A (cynical?) image of Zuckerberg's plan could be a large impoverished crowd of people given a little space before an enormous bakery window that pushes out intoxicating wafts to them through tiny holes.

 

Third point: Remember that scene in Minority Report (2002) in which a commercialized public pedestrian passage personally appeals to the main character as he passes through? While the scene's face recognition technology may imply a ubiquitous computing grid, we can't tell how that grid of the imagined world of 2054 combines public and private services. Does the consumer, John Anderton, receive personalized ads based on data services to which he subscribes? Or does he receive them because vendors pay the owner of the computing infrastructure for Anderton's attention, and Anderton himself doesn't pay to block them?

 

In either case, the fact that Anderton wears no computing device that could refuse the probing of face/iris recognition devices or otherwise negotiate the ads suggests an important implausibility in what Minority Report shows us. Though Google Glass may encounter resistance to its wearable computing technology, our increasingly mediated culture suggests that resistance to wearables (and implants too, in the long run) will be futile.

 

Since envisioning is the first step in making new technology real, the blurry edges and omissions of popular depictions of the future have value for choices of innovation in the present. And the details of the computing grid and user recognition technology that might be in play for the John Andertons of the actual 2054 may be more than just a darkly ironic open question.

Summary: How big the botnet problem is, how it can affect your network and how traffic and log analysis can help slay the botnets in your network.

 

As a network administrator, you may have implemented security measures to stop DDoS attacks and upped the ante against malware. You may have firewalls, ACLs, and intrusion detection and prevention systems in place to protect your network from attacks originating from the Internet.

 

But have you thought about a scenario where your network is hosting a DDoS attack or sending out spam? In that case, your network is contributing to an attack rather than being under attack.

 

That can happen if the computers in your network have been compromised and are part of a botnet. Other than possible legal issues and blacklisting of your public IP addresses, you may also incur huge bandwidth charges because bots in your network are sending countless spam emails or taking part in high-traffic DDoS attacks. For example, there was a record-breaking DDoS attack that reached 400 Gbps at its peak!

 

What is a Botnet?


A botnet is a network of compromised computers, called bots, which are controlled by a bot master through a command and control (C&C) center. Bots can be remotely configured to forward transmissions that carry out DDoS attacks, email spamming, click fraud, and malware distribution. The number of hosts or bots in a botnet can range from a few thousand to millions (Zeus, Conficker, or Mariposa, for example).

 

The C&C center is the interface through which the bot owner manages his bots, mostly from behind Tor, and the communication methods used include IRC channels, peer-to-peer, social media and now, even the cloud. Statistics show that each day, more than 1,000 DDoS attacks [1] occur and between 40 and 80 billion spam emails [2] are sent. Botnets are responsible for almost all DDoS attacks and more than 80% of the total spam sent worldwide [3].

 

Detecting Botnets through Analytics:


To stop bots, you will first have to detect them. Bots can lie dormant for months at a time and become active only when they have to take part in a DDoS attack. That doesn’t mean bots are undetectable. It is possible to detect bots by analyzing event logs and network traffic behavior. So, let’s take a look at some common bot behaviors that can help with detection.

 

IRC is one of the methods a C&C center uses to communicate with its botnet, and the communication is kept as short as possible to prevent noticeable impact on the network. If you cannot block IRC in your network, analyze your network traffic and check for port-protocol combinations that match IRC traffic. And if you see multiple short sessions of IRC, make sure to scan for botnet presence in your network. You can also scan your system logs to find out if any new programs have been installed, if there has been unexpected creation, modification or deletion of files, or if registry entries have been modified. If any of it smells like IRC, you know what to look for next.
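As a rough, hand-rolled illustration of that kind of check (not a feature of any particular product), the Python sketch below flags internal hosts with many short sessions to well-known IRC ports in an exported flow log. The CSV column names (src_ip, dst_port, duration_sec) and the thresholds are assumptions you would adapt to your own collector and baseline.

```python
import csv
from collections import Counter

# Assumed CSV export of flow records with columns: src_ip, dst_port, duration_sec
IRC_PORTS = {6660, 6661, 6662, 6663, 6664, 6665, 6666, 6667, 6697}  # common IRC ports
SHORT_SESSION_SEC = 30          # what counts as "short" is a tunable assumption
SUSPICIOUS_SESSION_COUNT = 20   # sessions per host before we raise a flag

def flag_irc_suspects(flow_csv_path):
    short_irc_sessions = Counter()
    with open(flow_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if int(row["dst_port"]) in IRC_PORTS and float(row["duration_sec"]) < SHORT_SESSION_SEC:
                short_irc_sessions[row["src_ip"]] += 1
    return [ip for ip, count in short_irc_sessions.items() if count >= SUSPICIOUS_SESSION_COUNT]

if __name__ == "__main__":
    for suspect in flag_irc_suspects("flows.csv"):
        print("possible IRC bot traffic from", suspect)
```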

 

A C&C center is what controls the botnet. If the C&C center is taken down, the botnet itself is useless. For resilience, a C&C center has two options. One, known as fast flux, involves constantly changing the IP address associated with the FQDN that hosts the C&C center. The other, domain flux, creates multiple FQDNs each day and allocates them to the IP address of the C&C center. Because of this, the bots have to do a number of DNS lookups to locate their C&C center. Analyze egress/outbound traffic from your network or logs related to DNS, and if you find more DNS lookups than expected or DNS lookups for weird domain names, it could be a bot.
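Here is a similar sketch for the DNS behavior: count lookups per client and flag clients that make an unusually large number of queries for mostly never-repeated names, which is roughly what fast flux and domain flux look like in a query log. The log format and thresholds are assumptions for the example.

```python
from collections import defaultdict

# Assumed log format: one query per line, "<timestamp> <client_ip> <queried_domain>"
LOOKUPS_PER_CLIENT_THRESHOLD = 500    # tune against your own baseline
UNIQUE_DOMAIN_RATIO_THRESHOLD = 0.8   # mostly never-repeated names smells like domain flux

def flag_dns_suspects(dns_log_path):
    lookups = defaultdict(list)
    with open(dns_log_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 3:
                client_ip, domain = parts[1], parts[2]
                lookups[client_ip].append(domain.lower())
    suspects = []
    for client, domains in lookups.items():
        unique_ratio = len(set(domains)) / len(domains)
        if len(domains) > LOOKUPS_PER_CLIENT_THRESHOLD and unique_ratio > UNIQUE_DOMAIN_RATIO_THRESHOLD:
            suspects.append(client)
    return suspects
```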

 

Like malware, bots too search for other vulnerable hosts to infect. To find open ports on a host, a burst of packets with the SYN flag set is sent either to a single host on multiple ports or to multiple hosts on a single port. If the target port is open, the system responds with a SYN-ACK packet. So, if you see too many conversations from a host to other hosts with only the SYN flag set, or an increase in the packet count with no major increase in traffic volume, you are possibly looking at a port scan by the bots.
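A quick way to spot that fan-out pattern in exported flow records is to count, per source, how many distinct destination/port pairs were touched by SYN-only flows. Again, the column names and threshold are assumptions for the sketch.

```python
import csv
from collections import defaultdict

# Assumed CSV flow export with columns: src_ip, dst_ip, dst_port, tcp_flags
FAN_OUT_THRESHOLD = 100   # distinct (host, port) targets before we call it a scan

def flag_port_scanners(flow_csv_path):
    targets_per_source = defaultdict(set)
    with open(flow_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # SYN-only flows: the SYN flag is present and the ACK flag is not
            if "S" in row["tcp_flags"] and "A" not in row["tcp_flags"]:
                targets_per_source[row["src_ip"]].add((row["dst_ip"], row["dst_port"]))
    return {src: len(targets) for src, targets in targets_per_source.items()
            if len(targets) >= FAN_OUT_THRESHOLD}
```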

 

Remember the statistic about almost 90% of the total spam being sent by botnets? Something worse than receiving spam is being slammed with a huge bandwidth bill for spam emails sent by bots from your network, plus possible blacklisting of your IP address and other legal troubles. Because spam email has to be sent from your network to the outside, SMTP has to be used. If you see an unexpectedly high volume of SMTP traffic originating from your network to the outside, especially from random endpoints, bad news: you are hosting a spam bot!
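The spam-bot case reduces to a simple question: which internal hosts, other than your legitimate mail relay, are pushing significant SMTP traffic outbound? A minimal sketch follows; the relay address and byte threshold are hypothetical.

```python
import csv
from collections import Counter

# Assumed CSV flow export with columns: src_ip, dst_port, bytes
KNOWN_MAIL_SERVERS = {"10.0.0.25"}        # hypothetical address of the legitimate relay
SMTP_PORTS = {"25", "465", "587"}
BYTES_THRESHOLD = 50 * 1024 * 1024        # 50 MB of outbound mail from a single host

def flag_spam_bots(flow_csv_path):
    outbound_smtp_bytes = Counter()
    with open(flow_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["dst_port"] in SMTP_PORTS and row["src_ip"] not in KNOWN_MAIL_SERVERS:
                outbound_smtp_bytes[row["src_ip"]] += int(row["bytes"])
    return [ip for ip, total in outbound_smtp_bytes.items() if total >= BYTES_THRESHOLD]
```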

 

SYN flooding is one of the methods bots use to carry out a DDoS attack. Bots send SYN messages with a spoofed source IP address to their target so that the server’s SYN-ACK reply never reaches the real source. The server keeps the connection open waiting for a reply while it receives more SYN messages, all leading to a DoS. Watch your outbound traffic for conversations with only the SYN flag set but no return conversation with an ACK flag. And check for egress network traffic whose source has an invalid IP address, such as an IANA-reserved IP or a broadcast IP. Both these behaviors can be due to bots taking part in a DDoS attack.
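And here is a sketch of those last two checks together: outbound SYN-only flows whose source address is not one you legitimately originate (reserved, multicast, loopback, or simply outside your own prefixes). The internal prefix list is an assumption for the example.

```python
import csv
import ipaddress

# Assumed CSV flow export with columns: src_ip, dst_ip, tcp_flags
# Prefixes you legitimately originate traffic from (an assumption for this sketch).
LEGITIMATE_SOURCES = [ipaddress.ip_network("192.168.0.0/16")]

def is_plausible_source(ip_str):
    ip = ipaddress.ip_address(ip_str)
    if ip.is_multicast or ip.is_reserved or ip.is_loopback or ip.is_unspecified:
        return False
    return any(ip in net for net in LEGITIMATE_SOURCES)

def flag_syn_flood_candidates(flow_csv_path):
    suspects = set()
    with open(flow_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            syn_only = "S" in row["tcp_flags"] and "A" not in row["tcp_flags"]
            if syn_only and not is_plausible_source(row["src_ip"]):
                suspects.add((row["src_ip"], row["dst_ip"]))
    return suspects
```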

 

While the patterns we discussed here can also be genuine IP traffic behavior, keeping an eye open for anything out of the ordinary and comparing that information with your baseline data or normal network behavior will help you minimize false positives.

 

There are a number of mature, easy-to-use options for network behavior analysis, such as packet capture, flow analysis with technologies like NetFlow, or an SIEM tool. Options such as NetFlow and syslog exports are already built into your routers, switches and systems. You only have to turn them on and use reporting or SIEM tools such as a NetFlow analyzer or log analyzer. Such solutions are cost-effective and do not need complex configuration or setup. So start your traffic and log analysis to slay those botnets.

 

Reference:

  1. Number of DDoS attacks per day as per the Arbor ATLAS report: http://atlas.arbor.net/summary/dos
  2. Guardian claims the peak was at 200 billion: http://www.guardian.co.uk/technology/2011/jan/10/email-spam-record-activity
  3. 88% of all spam emails are sent by botnets: http://www.techrepublic.com/blog/10-things/the-top-10-spam-botnets-new-and-improved/

http://searchsecuritychannel.techtarget.com/feature/Virtual-honeypots-Tracking-botnets

One of the most common discussions among network administrators is about anticipating and preventing issues before users even notice them. This blog will help you understand the importance of baselining in network monitoring, and how it can help you maintain steady network performance.

 

What is Network Baselining?

Network Baselines are ideal performance metrics obtained by measuring your network over a particular time period. Baseline statistics provide a way to validate your current network status against recommended performance standards, helping admins find the “normal” operating level of network devices.

 

Why should you baseline your network?

Real-time analysis of different network properties such as device utilization, CPU/memory usage, connectivity, and resource performance can be used to formulate baselines, which in turn help network managers forecast needs and optimize network performance. By understanding what the normal performance characteristics of a device are, you can identify potential problems and understand vulnerabilities in your network when the device's performance deviates.  For example, suppose the CPU utilization on a switch or router has a normal operating range of 20%.  If you suddenly see a spike to 80%, this could be a sign that something in your network has changed and needs further investigation.

Network Baselines also act as a reference point while troubleshooting issues, and will help you understand network patterns and usage trends. Network Baselines provide valuable data to evaluate and support decision makers on existing policies, network compliance, and future network upgrades.

 

How are network baselines calculated?

In small environments, network admins manually calculate baselines by creating an inventory of the devices to be monitored and defining what data needs to be polled. They build statistics from the polled information to establish thresholds, then use simple network monitoring tools to create alerts based on these baseline thresholds and consistently monitor the network for any imminent problems. Manual baselining processes, however, aren’t feasible for larger enterprise networks where hundreds of devices need to be monitored. This is where automated monitoring tools with built-in baselining become valuable.

 

How can Advanced Network Monitoring tools help?

Advanced network monitoring tools that include baselining can dynamically calculate network baselines from historic network performance data. Using baseline tests, admins can derive standard threshold metrics for network devices. These metrics allow the monitoring tool to calculate warning and critical thresholds according to the baselines, which in turn can be used to set more accurate alert levels. This will help admins quickly tell the difference between a device that is performing well and a device that is about to fail.
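As a simple illustration of the idea (real baselining engines are more sophisticated, accounting for time of day and seasonality), warning and critical thresholds can be derived from historic samples with nothing more than a mean and a standard deviation. The sigma multipliers below are assumptions to tune.

```python
import statistics

def baseline_thresholds(samples, warn_sigma=2, crit_sigma=3):
    """Derive warning/critical thresholds from historic samples (e.g. hourly CPU %)."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    return {
        "baseline": mean,
        "warning": mean + warn_sigma * stdev,
        "critical": mean + crit_sigma * stdev,
    }

# A few days of hypothetical CPU-utilization samples for one router
history = [18, 22, 19, 25, 21, 17, 23, 20, 24, 19]
print(baseline_thresholds(history))
```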

Network Baselines are also very useful in planning for hardware/service upgrades, creating network policies based on user requirements, and defining compliance standards.


 

Welcome to this month’s IT Blogger Spotlight. This time around we’re catching up with Wesley David, who blogs over at The Nubby Admin. Wesley is also active on Twitter, where he goes by @Nonapeptide.

 

Without further ado…

 

SWI: So Wesley, how long have you been blogging for and what got you started?

 

WD: A friend of mine had a Drupal blog for his own ramblings, and I saw it and was intrigued. I asked for an account on his CMS and voilà! That was in 2007. However my writing in general goes back much further. I’ve always liked to write, specifically humorous anecdotes. Writing is just a natural thing for me. No matter what I do, I’m going to end up writing about it somewhere.

 

As I sunk deeper and deeper into IT as a profession, my writing tendency came to the forefront. I was often the only one in the department interested in and capable of writing thorough documentation. Later I realized that the best way for me to remember solutions to problems I had solved was to write it down. I turned to blogging as less of a platform for personal ranting and more of a scrapbook where I can keep answers to problems that I’ve stumbled over. Often I’ll search Google for an answer and find it in an old post that I forgot about on my own blog!

 

SWI: I have to ask, what’s the story behind “The Nubby Admin”?

 

WD: Well, The Nubby Admin is, to the best of my recollection, my third IT blog. The first being an account on my friend’s Drupal site. The second was a Blogspot blog that picked up momentum over about a year. After several years of blogging on platforms that I didn't host or wield deep control over, I decided to register a domain for my own little slice of the internet. I named it “The Nubby Admin” to avoid taking myself too seriously. I’m not even a noob. I’m a nublet. I’m a fraction of a noob.

 

SWI: Well, you may consider yourself a nublet, but your blogging says otherwise. What’s your favorite topic to blog about?

 

WD: I most enjoy writing about solutions to my daily problems. I’m not big on editorials, exposés or opinion pieces. I find that there’s usually too many facets to any given story for me to feel comfortable with holding a strong opinion.

 

Virtually every time I bump into a problem, I open up Textedit, Notepad, vi or something similar and type the following:

 

Solving XYZ Problem

My Symptoms:

My Solution:

The Long Story:

 

I’ll first carefully write down the symptoms. Often that will make me consider assumptions about the problem. For example, I might think, “Sure the SSL connection is being rejected, but is every available version of SSL being rejected? Hmm…let’s go try to connect to the service using all possible ciphers. Wait, how can I easily cipher scan a SSL/TLS connection? Let’s go find a tool…” Then I’ll open up a second document and title it, “How to easily scan what SSL/TLS versions are available on a remote connection,” and solve that problem and write about it in detail as I go, too!

 

I find that it’s confidence-building to write those words “My solution:” even though I don’t have one yet. I then move on to “The Long Story” and carefully write everything I do in the process of troubleshooting. That way I have a detailed history of what I’ve seen and done. By the end of the journey, I usually have a solution and several documents that detail how I solved various problems all on the same trajectory to solve the original problem. With minimal editorial effort, I then clean the documents up and turn them into blog posts.

 

Finally, I do enjoy writing the occasional humor piece. A meme, a quizzical photo, a humorous anecdote. It’s nice to break up the stone cold troubleshooting with a few chuckles.

 

SWI: OK, so which type of posts do you find end up being the most popular?

 

WD: My most popular short-term posts are humor. For example, “Don’t Eat Too Much Three Bean Salad. The Server Will Crash.” A very recent example was taking my frustration with cloud vendors and memeifying it Pulp Fiction style. That one blew up pretty big across social media. Everyone loves to laugh, and a humorous piece is often more accessible to people even if they don’t understand the specific subject matter.

 

However, my most popular long-term posts are ones that solve very common problems. After approximately four years, I’m still getting dozens of people per day coming to my site for a solution to a common VPN connection problem that was introduced in Windows 7. I consistently out-rank vendor sites for some error message keywords. I used to work with a guy who was really into SEO (white hat, legit stuff), and I also find Google’s Matt Cutts to be a fascinating teacher concerning how Google’s search engine works. As a result, I’ve learned a few practical tips on how to get blog posts to rank high for your chosen keywords. And why wouldn’t I want to rank high when I’ve found a true solution and documented it thoroughly? Most search results for error messages come back with sketchy forum posts that tell you eight different things, half of which don’t work and the other half are misguided or downright dangerous.

 

SWI: It’s great that you’re able to so fluidly combine your day-to-day work with your blogging. Speaking of, what’s your day job?

 

I’m currently the owner of my own consultancy. It’s a one-man LLC formed in the U.S., specifically the state of Arizona, but I do have subcontractors that can take care of specialist topics or cover spill-over work. I’m a true generalist, consistently working with Windows and Linux, wired and wireless networks, databases, a little bit of programming and a whole lot of documentation writing. My favorite types of work typically involve web-based workloads and systems that have automation-based problems to solve.

 

Nevertheless, after four years of being a combination IT pro, marketer, sales person, accountant and tax specialist, I’m packing it up and looking to re-join a team. I’m transitioning away from client work and interviewing for positions and companies that tend to focus on the webops, whatever-as-a-service style of infrastructure. So if anyone out there is hiring as of the posting of this interview…

 

SWI: On that note, let’s give those potential employers out there some more food for thought. Tell me, how did you get into IT?

 

I’ve been a “computer person” since the mid-80s. My family was never, to my knowledge, without a computer in the house. However I had different passions as a pre-teen and teenager, usually involving the military, law enforcement or emergency medicine. However I never landed on anything in particular, and my parents, after watching my indecision bleed into my very early 20s, handed me a book on desktop support for the relatively new Windows XP operating system. My continued rent-free status at the time was the leverage used in this discussion, and the rest is history!

 

I also discovered that I truly love the problem solving nature of system administration, especially by automating solutions so you never have to worry about them again. I like to make people happy, specifically by relieving annoying burdens that plague their workdays. That was approximately ten years ago and I hope there’s many more years of problem solving in my future.

 

SWI: What are some of your favorite tools?

 

My most favorite tools are built-in tools because you can rely on them in almost any situation. I hate being strongly reliant on non-standard, third-party tools because if you ever move to an environment that doesn’t use that one specific tool, then you’re lost and adrift. I like standards and repeatability. I like syslog, Windows Event Viewer, SNMP, the /proc/ virtual filesystem, grep, powershell…You get the idea.

 

Nevertheless, my favorite things to use outside of those vanilla, baked-in tools are currently Chef to automate the deployment of servers and Ruby for tool making and automation. Oh and a giant whiteboard to use for kanban!

 

SWI: I know your day job and blogging keep you busy, but when you do find spare time, what are some of your favorite hobbies?

 

I have clinical symptoms of both ADD and OCD (TIC) so my hobbies this year are not what they were last year, and my hobbies next year will not be what they are this year. I have a one to three year lifespan for the hobbies that actually take hold of me, in contrast to the week-long obsessions that quickly fade and get forgotten within days.

 

Having said that, my current major hobby is cycling. I’m working up to a two and a half minute mile on flat land with a loaded touring bike, but also want to ride a century sometime in the next year. I’m using an old Fuji while building a Surly Long Haul Trucker from the ground up, currently stalled at the head tube while I wait to purchase a reamer/facer and cup press.

 

I’ve had a lifelong hobby that has miraculously survived my gnat-like attention span: Origami! To my recollection, I’m currently in my 26th or 27th year of fiddling with origami. As a result I also collect fine papers, usually from Japan.

 

Past hobbies that I learned from and have fond memories of, even though I no longer spend time on them include: Chess, gardening, golf, attaining a private pilot’s license, astronomy and model rocketry.

 

Hobbies that my “OH SHINY!” brain is eyeballing for the future include: Frisbee golf, survivalist camping, handguns/rifles, hang gliding and minimalist traveling.

 

SWI: OK, last question: What are the most significant trends you’re seeing in the IT industry right now and what’s to come?

 

At the current moment the trendy-kewl 3hawt5me topic is the so-called “devops” movement. The term and the culture are in disarray, though, and many people are confused as to what the term and concepts mean. It’s not “devs doing ops” as that’s a complete disaster. It’s not “no ops” as that’s a misnomer in many ways. It’s not “automate all the things!” or “ops knowing code” as those two are just part of being a good system administrator, and always have been, going back many decades.

 

Learning from the people formative in the fledgling topic, devops is simply a matter of doing infrastructure and development in such a way that allows rapid iteration through testing, QA and production code. Fail fast and fix fast, if you will. That implies infrastructure regeneracy, idempotency, and fungibility, as well as a host of other silly buzzwords. It’s not a concept or model that fits everywhere, especially with simple nuts-and-bolts IT. Getting people to understand, “You don’t need a ‘devop’ to make your Exchange cluster resilient and quickly recoverable, you just need a good sysadmin,” is hard nowadays. I think the days of button-mashing, GUI-bound sysadmins have broken the long, rich history of sysadmins that knew their code and never once thought there were two separate topic domains. As a result, what was once simply considered being an excellent sysop is now considered being a “devop,” but unfortunately that now includes all the silly misunderstandings of the new buzzword.

 

What’s to come, I believe, is operations teams reclaiming the former notion of system administration. That includes lots more scripting and coding to create automation that was once the near-universal norm. Also, as systems and applications can now spread out more easily across multiple nodes, the need for automation, monitoring and remediation tools is growing in size and complexity, but many of them require basic proficiency with code, which is great. MOAR CODE!


Your Name Here

Posted by einsigestern Feb 25, 2014

Do you manage firewalls and other network devices? Does it sometimes seem that swimming the Great Barrier Reef would be safer? Well, hop out of that shark cage. SolarWinds has the solution that will make you say, "This is so easy, I must be dreaming." Well, it is, and you're not. So, before you put on that Speedo, check out Firewall Security Manager.


PCI DSS and Rule Changes

In Firewall Security Manager (FSM), you can apply rule and traffic flow analysis to check and report on firewall compliance with the Payment Card Industry Data Security Standards (PCI DSS) requirements. FSM reports show which PCI DSS control items failed the audit, and the rules that caused the failure.

After your audit, you can track all the rule changes. Business justification for the change can be recorded and displayed in your reports. You can even use rule change history in Rule Documentation reports to track down who changed the rule that borked your audit, then go take their Skittles.

Debug Traffic Flow enables you to investigate problems in firewall configurations that involve packet flow through a device. You can identify policies, security rules, NAT rules, routes, and implied rules that affect a packet as it zips through a firewall. FSM gives you the tools to explore and debug traffic flows between /24 subnets with up to five services.

Traveling Packets

Would you like to see how packets travel through Layer 3 devices? Packet Tracer identifies all routable paths from the source to the destination, including effects of ACLs, NATs, and routes along each path. You can check if a packet will reach the destination, while you enjoy your confiscated Skittles, and even check which devices and rules within the devices are allowing or blocking the packet.

Sometimes it would be great to have the capability to validate a migration from one firewall type to another. FSM compares their traffic flows and generates a validation report that shows specific differences in policy between two compared devices. For Cisco to Check Point firewalls, use the FSM step-by-step migration wizard.

Standard Naming

Don't you wish you had standardized object naming conventions across an entire set of devices? Sure you do. With FSM you can identify objects that have the same name but different definitions, or objects that have different names but the same definition. Automate finding and eliminating object definition conflicts across multiple firewalls. You can split object definitions into two or more separate definitions, combine object definitions, and apply other object restructuring changes. FSM generates scripts and uses the scripts to modify the existing configuration to use the new standard object definitions.

Wait, that's not all. Watch this blog for information on how to run FSM reports from Network Configuration Manager.

Orion Report Writer is a tool within Orion Network Performance Monitor (NPM) that allows users to format data and preview reports before displaying them. When running reports, sometimes it is not necessary or desirable to retrieve all the available information from a device. One question that has popped up is how to create a report that filters for certain days of the week, for example, pulling only the information collected during normal business hours rather than a report that includes everything (24x7). This article concentrates on how to filter reports for specific times and days of the week. For more information on configuring reports in their entirety, see Understanding Orion Report Writer.

 

The procedure for filtering is fairly simple and only involves making a few changes to the “Filter Results” section of the report. The following example shows a filter that returns information collected Monday thru Friday from 7 AM to 6 PM.


[Figure: the complete report filter]


The report consists of two complex conditions. The first complex condition, shown below, consists of three statements grouped within an “all” condition. The first two statements filter for the time of day, spanning 7 AM to 6 PM. The third statement filters for a device with the IP address 10.110.68.115. Please note that rather than using an IP address in the filter, you can also customize it to use custom properties or other filtering techniques.


[Figure: the first complex condition]


The second complex condition, shown below, consists of five statements grouped within an “any” condition. This condition states that if the polled data falls on any day of a normal work week (Monday through Friday), it is included in the report. You can also convert the report to display weekends by changing the days to Saturday or Sunday.


[Figure: the second complex condition]
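The filter itself is configured in the Report Writer GUI as shown above, but for readers who prefer to see the logic spelled out, here is a rough Python sketch of the same business-hours test. The record structure and field positions are hypothetical, purely for illustration.

```python
from datetime import datetime

def in_business_hours(timestamp, start_hour=7, end_hour=18):
    """True for records collected Monday through Friday between 7 AM and 6 PM."""
    is_weekday = timestamp.weekday() < 5          # Monday=0 ... Friday=4
    in_window = start_hour <= timestamp.hour < end_hour
    return is_weekday and in_window

# Hypothetical records: (timestamp, node_ip, value)
records = [
    (datetime(2014, 2, 24, 9, 30), "10.110.68.115", 42),   # Monday morning: kept
    (datetime(2014, 2, 22, 9, 30), "10.110.68.115", 55),   # Saturday: dropped
    (datetime(2014, 2, 25, 22, 0), "10.110.68.115", 61),   # Tuesday night: dropped
]
business_hours_only = [r for r in records if in_business_hours(r[0])]
print(business_hours_only)
```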


Did you ever stop to think that you need to manage your database management data* just as well as you do your "regular" organizational data? That means you should be practicing what you preach.  There's a heap of clichés I can use here…and I probably will.

 

We've talked about Questions, Data and Getting to Answers in this series.  Now let's talk about how we need to administer all that data and answers.  We give our IT customers all kinds of advice on just how to do that…and yet way too many of us show them that we want them to do as we say, not as we do.

 

How do you collect all your database and systems data?  Do you use dozens of custom scripts and store your administration data in spreadsheets, XML files, CSVs and random tables across various servers and systems?  How do you protect that data?  Do you know if it contains sensitive data? Do your corporate attorneys?  Do you know where your most up-to-date copy of your resume is, right now? You might need it in a hurry.

 

  1. Follow your own security advice. That means not running monitoring and other data collection solutions as SA or Admin, and using role-based, LDAP, or other more professional approaches to security.  It means securing all the data, especially when your database administration data includes sensitive data.  This is especially true when you are capturing queries and parameters.
  2. Don't spread data all around.  You need a well-managed, perhaps centralized point to track where and what your database admin data is.  If it is stuck in spreadsheets, random tables, proprietary data stores, XML files, CSVs, etc., you are going to have a very costly and difficult time getting answers. 
  3. Collect what you need, only when you need it. Just because you CAN collect some data doesn't mean you need to collect it, nor does it mean you need to collect it many times over.  Barring legislative mandates to collect data, are you sure you need to collect it? An example is running a Profiler trace for every single event, all the time, instead of only the events you need, when you need them. Which leads us to the next tip…. 
  4. Know what is worth collecting. This means you need to understand what each system's metadata item represents. Is it all the statistics since the service last restarted? Since it was turned on? Just for the last 2 weeks? 2 hours?  Does it really represent what the name says it does? Understanding the data requires training and learning.
  5. Set a good archiving strategy.  Sure, storage is free.  Except when it is not.  And collected data brings all kinds of expenses - security, archiving, backups, restoring, administering.
  6. Don't alert for every single event. Alert burnout is dangerous.  If you are getting too many alerts, you learn to snooze them or dismiss them without giving the serious ones the attention they deserve.  We've all done that. Don't be that guy whose phone is buzzing every 2 minutes with a new alert.
  7. Change control/Configuration.  I see this one all the time.  IT staff who need change control and configuration control on customer systems seem to think they can manage their own data with a free-for-all, devil-may-care attitude.  And then they fail their customers by missing something important.  Or by having bad data.  The saying that sends chills through my spine is "I only changed one simple thing, so I just promoted it into production."  And then for some weird reason ten systems went down in flames.
  8. Versioning. Trust me on this.  Your scripts are no different than other code. Some day you are going to want to roll back. Or you want to know why you are getting different data.  Or you'll want to know why the ten systems are going down in flames.
  9. Backups/Restores. Yes, the physics of data mean that sometimes hardware fails.  Or scripts. Or that "one small change" happens.  You'll want to have a backup handy for your database data, too.  It's not overkill; it's doing the right thing.
  10. Governance. All of these tips really come down to data governance for database data.  I know not everyone likes that term. You've told management you need certain rules in place for managing "real" data.  This is now your chance to dogfood your own rules.  And if it tastes bad for you, it tastes bad for everyone else.  Nothing inspires IT teams to resist governance like governors who think they are above doing things the way they want everyone else to do their work.  Database management governance really just means administering your database data the same way you'd administer all your organization's other data.

 

 

So I'll leave you with this: Love your Database Data.  It's just as important as all the other corporate data you manage. You'll find your Answers there.  So you'll want to ensure your data is there when you need it.  The right data, at the right time.  Your boss needs to understand that.  And fund it. Trust me.

 

Questions for You

 

So what else do we need to do with metadata about databases and systems?  Do you think these are all really obvious things? Then why don't we see more shops managing their data this way?

 

*Yes, I used manage- there twice and data twice.  It's meta times two.  You are welcome.

Toys: Digital vs. Analog

Posted by Bronx Feb 24, 2014

The Old Days

I remember my father's new watch when I was a kid. It was a newfangled gold digital watch, not unlike the Pulsar pictured below.

LED.JPG

(Yes, I'm that old.) I thought this was the coolest thing ever! Every five minutes I'd ask him the time just to see it work. Fast forward forty years and I'm out of toys, sorta.

 

Why? First, let me preface my little story by informing you that I just spent my first winter here in the Utah office (which is located directly across the street from the NSA (Booooooooo)). Winter in Utah (to my chagrin) is a rather lonely place. It seems the residents like to hibernate a bit. Since I have few friends here, my social mingling will have to wait until the spring. No biggie. As a geek, I think it's safe to assume that most geeks are not social butterflies. That said, let's run down the list of my toys.

 

The New Days. My Toys, Winter, and Boredom.

  • Flat screen TV:  Really don't watch much TV even though I have a thousand channels. (I'm not a TV guy, but...) The Walking Dead is about it since Breaking Bad ended. I watch the news, but that gets me angry and just leads me to drink.
  • DVR: Now I can get angry on my time.
  • Chromecast: Great Gadget.
  • Computer: I was once so bored and lonely that I actually programmed an artificially intelligent friend that conversed with me.
  • Robot Vacuum: Closest thing I have to a pet.
  • Xbox: Bought the thing the moment GTA 4 came out. After hooking it up and playing for a bit, I looked at the time and was shocked to learn it was two in the morning - and on a week night! I had been playing non-stop for seven hours! The next morning I brought the system back to the store for a full refund, surmising that the paltry excuse I had for a social life would vanish unless this monstrosity was vanquished posthaste.
  • Tablet: Nice li'l in between gadget for doing simple tasks.
  • Cell phone: Small tablet. (I'd call it a phone but no one I call ever uses it to talk (except mom)).

 

My latest toy is my new watch. It does all kinds of neat things, like setting itself to the atomic clock daily, telling me the date and day of the week, and a host of other nifty tricks, including telling me the time. Check it out here (and no, I did not pay that crazy price).

[Photo: the new watch]

Everything Old is New

I remember when my phone was analog and my watch was digital. Now that equation has been inverted. I loved video games as a kid and now I avoid them. Also loved TV as a kid; now, not so much. So with all these cool electronic toys to play with, what did I do over the cold and lonely weekend? I washed my car and read a book. Not on my tablet, a real, hardcover, honest-to-goodness book.

 

The Moral

Like my mother always said, "Go out and play with your friends." (Or was it, "...play in traffic"?) Anyway, the point is, interact with people more and toys less - I think.

With all of the attention coming to IP address management, things get even more challenging when we add NV (Network Virtualization) into the mix. I've had a lot of conversations with people about how IP management, including IPv4 versus IPv6, will be affected by the addition of NV. The reason I link the two of them together is because while they are not the same, they are also not mutually exclusive.

 

IP address management and network allocation for Layer 3 can be an unruly beast sometimes. As I often say: today's best practices are tomorrow's "What were we thinking?" This becomes true even more often with the quick adoption of new networking technologies. Our topologies never seem that volatile, but there are a host of different reasons why volatility gets introduced, and as our datacenters change, so do our IP networks.

 

The Times They are a-Changin'

 

Adding Layer 2 management into the process creates even more of a challenge. Layer 2 VLANs are often confused with Layer 3 management because they tie together, but they serve different technical purposes. More and more we are seeing the idea of stretched networks and the extension of Layer 2 over Layer 3 networks. With this happening, documenting and visualizing our network topology can seem difficult.

 

 

Suddenly, with Layer 2 domains stretched across our Layer 3 networks, we have moved away from Layer 2 being merely a logical boundary to isolate broadcast domains within a single datacenter. Now, Layer 2 is becoming the most looked-at portion of our network because we are changing the way we think of the datacenter. Server addressing is no longer required to be tied to a physical datacenter.

 

So, with NV on the cusp of becoming much more mainstream, we have to rethink the way that we manage our networks. NV takes the L2/L3 discussion even further to add policy-based management into the menu of options we use. We are at the beginning of a fast changing world in datacenter and network management.

 

I'd like to know how administrators are deploying and managing IP networks, and I'd love to hear your thoughts on how you think you will manage your network differently in the coming year.

 

What do you think about:

 

  • IPv6 for your network
  • IP address management for dynamic cloud environments
  • Is Network Virtualization on your roadmap?
  • If you have, or will have NV in your portfolio, what's your protocol of choice?

Thanks for sharing and I look forward to hearing your comments and thoughts!

 

In the previous blog, we discussed how VLANs can help to effectively manage workstations, security, and bandwidth allocation. Managing VLANs becomes much easier for network admins when network traffic, user access, and data transfers are isolated and routed separately. Sometimes, network admins face a scenario where devices from different VLANs communicate with each other because their shared routers hold routes to devices within each of them. In this scenario, it's important to take advantage of VLAN management techniques that allow you to isolate traffic, increase admin control, and share resources among users. One such technique that can be used to accomplish this type of management is called Virtual Routing and Forwarding (VRF). In this blog we’ll discuss the basics of Virtual Routing and Forwarding, and how you can use this technology to easily manage the Layer 3 devices in your network.

 

Introducing Virtual Routing and Forwarding (VRF)

VRF is a technology that allows multiple instances of a routing table to coexist within the same router at the same time. Without using multiple devices, VRF enables network engineers to increase the functionality of a single router by allowing network paths to be segmented. This is done by virtualizing routing tables. When a packet enters a router, it is forwarded using only the routing table of the VRF with which the ingress and egress interfaces are associated.
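A toy model makes that behavior easy to see. In the Python sketch below (purely illustrative, with hypothetical VRF names, interfaces, and routes, not how any router actually implements it), two VRFs carry the same 10.1.0.0/16 prefix without conflict, because each lookup is confined to the table of the ingress interface's VRF:

```python
import ipaddress

# Hypothetical per-VRF routing tables: VRF name -> list of (prefix, next_hop)
vrf_tables = {
    "CUSTOMER_A": [("10.1.0.0/16", "192.0.2.1"), ("0.0.0.0/0", "192.0.2.254")],
    "CUSTOMER_B": [("10.1.0.0/16", "198.51.100.1")],   # same prefix, no conflict
}
# Each interface belongs to exactly one VRF
interface_vrf = {"Gi0/0": "CUSTOMER_A", "Gi0/1": "CUSTOMER_B"}

def lookup_next_hop(ingress_interface, dst_ip):
    """Longest-prefix match, but only within the ingress interface's VRF table."""
    table = vrf_tables[interface_vrf[ingress_interface]]
    dst = ipaddress.ip_address(dst_ip)
    matches = [(ipaddress.ip_network(prefix), next_hop)
               for prefix, next_hop in table
               if dst in ipaddress.ip_network(prefix)]
    if not matches:
        return None
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup_next_hop("Gi0/0", "10.1.2.3"))   # 192.0.2.1
print(lookup_next_hop("Gi0/1", "10.1.2.3"))   # 198.51.100.1, despite the identical prefix
```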



Is it Easy to Manage Layer 3 devices with VRF?

Yes. In a VRF-configured router, each routing table has a unique set of entries with its own forwarding details, thus enabling logical isolation of traffic. VRF requires a forwarding table with information on the next hop to push traffic through a specific device so that packets aren’t transferred outside the VRF path. There’s no need to encrypt or authenticate traffic since it’s automatically segregated. Additionally, independent routing instances allow you to use the same IP addresses for different groups without conflict.

 

What are the Challenges in Managing a VRF enabled Network?

VRF reduces the number of devices in your network by allowing you to share the same network resources. Usually, IT infrastructure service providers implement VRF-configured routers/switches in datacenters for multiple cascading routing instances while supporting users. ISPs use VRF to create separate VPNs for their users, enabling scalable IP MPLS VPN services. But if the number of VRF-enabled routers increases, administrators managing a huge network will find it difficult to isolate and manage each of those virtual routers independently. Without a strong framework, VRF support in your network might result in unfair scheduling of network resources and increased virtualization overhead. For service providers, VRF poses a significant challenge in monitoring and analyzing user data for ingress and egress traffic that has different markers at each end of a route.

 

Managing VRF with Advanced Network Monitoring Tools

Typically, network admins implement VRF for DNS, DHCP, and Internet services since it allows them to use the same infrastructure for different users. You can dedicate virtual networks to specific applications or users by isolating traffic in Layer 3 devices using VRF. Advanced network monitoring tools help you manage VRF-configured devices by providing useful information on traffic volume, next hops, and so on, across multiple routing tables. Monitoring all the routing tables simultaneously provides great insight into how VRF-enabled routers are performing, making a network admin’s job easier.

 


If you’re running a Windows environment, chances are you’re using Microsoft IIS as the Web server for your Web applications. Your IIS Web server is a critical component of your IT infrastructure. Services such as SharePoint, Outlook, and your Web presence depend on its availability. If a Web application or website takes too long to load, users will likely leave the site or raise a help desk ticket.


When you have different Web applications for different user groups, you should monitor the performance of the Web server, because a Web server issue can cause application downtime, which in turn impacts business services. To stay ahead of this, it’s a good idea to tune your Web servers from time to time and verify that the changes actually improve application performance and availability.


Like any other Web server in the network, IIS is prone to performance issues. Monitoring an IIS server is useful for improving server performance, identifying performance bottlenecks, increasing throughput, and identifying connectivity issues. To achieve optimum server performance, consider the following best practices:

  • Application pools provide a convenient way to manage a set of websites, applications, and their parallel worker processes. You'll want to start by monitoring your app pool's memory usage. If you find that the app pool is utilizing high memory, recycle it for optimum performance.
  • Monitor the number of client connections to the World Wide Web (WWW) service. This will help you have a better understanding of the load on the server. As the number of client connections increases, you should consider load balancing across multiple Web servers.
  • Monitoring data downloaded or uploaded to the WWW service helps you have a better understanding of the data traffic on your server, which allows you to make informed decisions on bandwidth allocation.
  • If monitoring incoming sessions shows that the IIS server is too busy, start by increasing the connection limit; if the server is simply overloaded, load balance across multiple Web servers.
  • As with any application, you will need to monitor basic metrics, such as CPU usage, physical memory, virtual memory, I/O read/write and so on; a rough sketch of polling a few of these counters follows this list.
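One low-friction way to start collecting these numbers is the typeperf utility that ships with Windows; the sketch below takes a single sample of a few counters from Python. The counter paths are assumptions and will vary by server, so list what is actually available with typeperf -q first.

```python
import subprocess

# Counter paths are assumptions for this sketch; adjust to the counters present
# on your server (run "typeperf -q" to list what is available).
COUNTERS = [
    r"\Web Service(_Total)\Current Connections",
    r"\Process(w3wp)\Private Bytes",           # memory of an IIS worker process
    r"\Processor(_Total)\% Processor Time",
]

def sample_iis_counters():
    """Take one sample of each counter using the built-in typeperf utility."""
    output = subprocess.check_output(
        ["typeperf"] + COUNTERS + ["-sc", "1"], text=True
    )
    print(output)

if __name__ == "__main__":
    sample_iis_counters()
```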

 

Watch this video and find out how you can improve the health of your IIS Web server.

 

As system administrators, we definitely know that we need to patch all the software and 3rd-party apps in our systems environment. Software patching is a part of our IT role that we perform frequently. For IT administration, there are typically two approaches to patching software:

  1. You are proactive and watch out for the latest patch updates from vendors and patch them pronto.
  2. You really don’t care what version of software you’re running in your enterprise. If you see a lot of buzz in the industry about a particular software version being insecure, your organization might go all uproarious about security, vulnerability, and compliance. At that point, you suddenly realize you should have applied the patch and scurry to patch things up before hackers break down your firewall barriers.

I know you don’t want to be the second guy, but we do end up becoming that sometimes. And that is because we don’t comprehend the full gravity of patch management and the upshot of not doing it soon enough.

  

Why Patch?

Let’s look at some top reasons why you need to apply software patches and update applications in your IT infrastructure.

  • Your current application version is not secure anymore. There can be vulnerabilities in your software which could compromise the security of your network’s data and IT assets. That’s why the vendor has released a security fix.
  • Your organizational IT policy now states that you have to run a specific version of an application on some specific platforms. It’s time to apply patches to update those apps.
  • Compliance is a key requirement for all organizations. You don’t want to run insecure and potentially vulnerable apps and get penalized for non-compliance.
  • You simply don’t want any data breaches and security compromises and later bear the brunt of the resulting casualties including data loss, monetary loss, tarnished reputation, and the cost of repair and redemption.

 

What to Patch?

There are likely a lot of patches that you need to apply on both your servers and end-user workstations. Much of this is software that gets patch updates from its manufacturer, which are released online to be downloaded and installed. Patches are often required to ensure the proper functionality and security of the following items:

  1. Your operating system
  2. Your browsers
  3. Your anti-virus software
  4. Vulnerable apps with frequent exploits such as Java® and Adobe®
  5. Chat messengers and online VoIP apps

Also patch any other 3rd-party apps and software you run that get security fixes and patch updates from their vendors.

 

When to Patch?

To ensure the security of your data and IT infrastructure, it’s a good idea to apply patches as soon as the manufacturer releases them and you have finished testing them in your environment (so they don’t break your systems or cause any unexpected failures).

 

Patching is better done sooner than later, so actively watch for patch updates. You can also subscribe to vendor alerts to receive timely notifications. One security risk is zero-day exploits, which can compromise your systems through unpatched and vulnerable software. Another is malware that sneaks onto your systems via unpatched apps and gradually starts causing damage and stealing secure data.

 

Patch before they hatch – the exploits!

 

How to Patch?

The process of patching can get complicated. For example, you might need to patch the same or different applications running on multiple systems, on multiple platforms, in distributed locations and network environments, and you can’t do it manually on a system-by-system basis. Then, what about the status of the applied patch? How do you know if it was successful or not?

Patch management is part of the larger IT administration and security strategy where organizations leverage centralized and automated patch distribution solutions to patch their system’s environment.

With the “how” of patch management, it is key to understand as much as you can about the status and success rate of patches. To efficiently manage patches, you need the capability to:

  • Automate patch management so you save a ton of time and get more value from your operational expenditure.
  • Benefit from bulk patch deployment so you can improve productivity by patching thousands of systems at the same time.
  • Gain access to pre-tested patch catalogues so all you need to do is deploy the patch and not worry about whether it’ll work okay and not impact any other running apps or the system itself.
  • Receive notifications and reports on patch statuses, as well as when, where, and who applied the patches and whether they were successful.
  • Discover which application versions are deployed and which systems remain unpatched by conducting vulnerability analysis and asset discovery.

 

Patch management is not just another IT task. It’s an organizational IT mandate that has both compliance and information security implications. Choose the best patch management solution that can simplify your organization-wide patch management process and scale up to meet your growing systems and application patching needs.

 

Visit PatchZone and learn more about patch management!

Do you manage firewalls and other network devices? Does it sometimes seem that wrestling a two-headed python would be easier? Well, drop that snake. SolarWinds has the solution that will make you say, "This is so easy, I must be dreaming." Well, it is, and you're not. So, before you open that bag of Gummy Bears, check out Firewall Security Manager (FSM).

Did I Change That?

Your network is out of whack, maybe. You really don't know, and you need to find out. A few changes here and there might make all the difference, but you can't experiment with your production gear. You need a way to tweak a few settings and see what happens without the risk of bringing the entire system down. This is where the FSM features Impact Monitor, Change Modeling, and Change Advisor really shine.

Impact Monitor helps you track changes in rule and object definitions. You can schedule monitoring, and send automatic notifications when FSM detects changes. Reports describe the changes, and tell you the effect on security and traffic flow.

 

What Happens if I Press this Red Button?

Change Modeling helps you determine the results of proposed changes to ACL, NAT, and route rules before you commit them to production. You can experiment with changes, share them with your team in an offline sandbox environment, and evaluate the security implications of the proposed changes. After you are satisfied with the proposed changes, you can generate scripts to automatically deploy the changes to production environments in a predictable, error-free way.

Change Advisor includes a method for non-technical users to enter firewall change requests. The Change Advisor Web form offers drag-and-drop entry to decrease host name and IP address errors. This eliminates fat-finger mistakes with critical parameters. Then you can automatically determine if the network already satisfies the change request. This eliminates unnecessary requests, and reduces the workload on network engineers. At your option, perform risk analysis of change requests, and enable your security/risk analyst to review security implications. To eliminate trial and error, FSM can automatically identify devices that require change, and provide network engineers with guidance regarding where the changes are required.

Don't miss the next in this series of blogs, titled "Your Name Here."

You've heard of P2P, but have you heard of V2V?

 

Vehicle-to-Vehicle (V2V) communications

 

Vehicle-to-Vehicle communications is, as the name implies, data exchanged between two vehicles. The NHTSA has announced that they are beginning to draft rules to "enable vehicle-to-vehicle (V2V)  communication technology for light vehicles," and that eventually all new vehicles will be required to have V2V communication technology.

 

V2V communication wirelessly transmits bursts of "basic safety data" ten times a second. While there's no hard definition of "basic safety data" right now, it will include the speed and position of the vehicle.
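To make that concrete, here is a rough sketch of the kind of payload a V2V radio might broadcast ten times a second. The field names and the JSON encoding are assumptions for illustration only; actual V2V messages follow the SAE J2735 Basic Safety Message format over dedicated short-range radio, not JSON over a print loop.

```python
# Simplified illustration of the "basic safety data" a V2V unit might send
# ten times a second. Field names and JSON encoding are assumptions; real
# V2V uses the SAE J2735 Basic Safety Message format over DSRC radio.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class BasicSafetyMessage:
    vehicle_id: str      # temporary, rotating ID rather than a VIN
    timestamp: float     # seconds since epoch
    latitude: float
    longitude: float
    speed_mps: float     # meters per second
    heading_deg: float   # 0-360 degrees

def broadcast(messages_per_second=10, duration_s=1.0):
    """Emit one encoded message every tenth of a second."""
    interval = 1.0 / messages_per_second
    end = time.time() + duration_s
    while time.time() < end:
        msg = BasicSafetyMessage(
            vehicle_id="temp-4f2a", timestamp=time.time(),
            latitude=30.2672, longitude=-97.7431,
            speed_mps=27.0, heading_deg=92.5,
        )
        print(json.dumps(asdict(msg)))
        time.sleep(interval)

if __name__ == "__main__":
    broadcast()
```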

 

V2V technology will be used to help prevent accidents. The current, tested implementation uses the safety data to warn drivers if there's a car coming. For example, if you are changing lanes, you will be warned if there's a car coming in the other lane. According to DOT research, V2V technology can prevent a majority of accidents that involve two or more vehicles. Right now, there are no plans for your car to take over; your car just warns you.

 

Downsides

 

There are a number of privacy issues that are going to come into play on these future regulations, and there are some significant security issues too.

The NHTSA has said that anonymized data will be available to the public. It's fairly easy to identify individuals from such data, so people who have safety concerns about being tracked (such as those with restraining orders against other people) will need to be extra careful.

 

Vehicle manufacturers will also be able to collect additional information so long as the correct basic safety data is transmitted to other vehicles. The extra data can be used by your insurance company to determine your rates, and so on.

 

It also sounds like your vehicle will be tracked, though the information may be somewhat difficult to access, given this line: "vehicles would be identifiable through defined procedures only if there is a need to fix a safety problem."

DataArrowSmall.png

As you've been following along, I started this series with Data Is Power. In my original graphic I listed QUESTIONS, DATA, ANSWERS as the information pipeline we need to keep systems humming.  I provided a list of questions I might ask in Question Everything.  A few of you jumped in with some more great questions.

 

One comment provides the perfect segue to this week's post:

 

There is a mass of information; we try and teach our clients they do not need to know everything - otherwise they will just be swamped with noise. Only the essentials - this saves a lot of unnecessary data. - hulattp

 

I think the key word here is know.  As a data evangelist I'm a bit biased towards making data-driven decisions.  That means collecting data even before I need it. Once an incident is underway, we may not be able to understand what happened without the right data.  And that's the hard part: how do you know what data to collect before you know you will need it?

 

Types of Data Collections

 

  • Inventory data: The catalog of what resources our systems are using.  Which servers? Databases? SANs? Other Storage? Cloud resources? How long have they been around?
  • Log data: The who, when, where, how, why, to what, by what, from what data.
  • External systems conditions: What else was going on? Is it year end? Are there power outages? Was a new deployment happening? Patches? New users? All kinds of things can be happening outside our systems. What is the weather doing right now (really!)?
  • Internal conditions: What was/is resource utilization at a point in time? What is it normally? What is our baseline for those things?  What about for today/this month/this date/this day of week? What systems have the most pain points?

 

That's a lot of data.  Too much data for someone to know.  But having that raw data lets us answer some of the questions that we collected in Question Everything.
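For the "internal conditions" bucket in particular, a little code goes a long way. Here is a minimal sketch, with made-up sample values, of how raw utilization readings can be turned into a baseline that flags anything far outside recent history; the window size and three-sigma threshold are illustrative choices, not recommendations.

```python
# Minimal sketch: keep a rolling window of readings and flag values far
# outside the window's mean. Sample data and thresholds are illustrative.
from collections import deque
from statistics import mean, pstdev

class Baseline:
    def __init__(self, window=60, sigmas=3.0):
        self.readings = deque(maxlen=window)
        self.sigmas = sigmas

    def add(self, value):
        """Record a reading and report whether it looks anomalous."""
        anomalous = False
        if len(self.readings) >= 10:  # need some history first
            avg = mean(self.readings)
            spread = pstdev(self.readings)
            anomalous = spread > 0 and abs(value - avg) > self.sigmas * spread
        self.readings.append(value)
        return anomalous

if __name__ == "__main__":
    cpu_baseline = Baseline(window=60)
    samples = [22, 25, 24, 23, 26, 24, 25, 23, 24, 26, 25, 97]  # last one spikes
    for sample in samples:
        if cpu_baseline.add(sample):
            print(f"CPU at {sample}% is well outside the recent baseline")
```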

 

When we are diagnosing an issue (and batting away our Pointy-Haired Boss asking "how is it going?"), having that data is going to help.  Having historical data is going to help even more.  If production files are missing, we can replace them with a backup.  But if an automated process is deleting those files, we haven't fixed the problem. We've just entered into a whack-a-mole game with a computer.  And I know who is going to win that one.

 

So we need to find ways to make that data work for us.  But it can't do that if we aren't collecting it. And it can't do that if we rely only on data about the system as it is right now.

 

Data Timezones

 

The task we are doing also affects the timeliness of the data we need.  There's a huge difference in what data we need depending on whether we are doing operational or remediation work. We don't just sit down and start poring through all the data.  We need to use it to solve the problems (questions) we have right now. I think of these as time zones in the data.

 

Activity | Data Time Zone
Operations (plate spinning) | Now & future
Diagnostics (firefighting, restoration) | Recent & now
Strategic (process and governance) | Recent & past

 

  • Keeping the plates spinning: Our normal job, running around keeping everything going like clockwork. Keeping the plates spinning so they don't break. In these cases, we want data that is looking forward.  We are looking for green across all our dashboards.  We want to know if a resource is having issues (disk space, timeouts, CPU utilization, etc.). We aren't looking back at what happened last week.  We won't actually have future data, but we can start predicting where problems are likely to pop up so that we can prioritize those activities.
  • Firefighting: Ideally, we want to know there's a fire long before the spark, but we don't always have that luxury.  We want to look at current and recent data so that we know where to start saving systems (and sometimes even people).  We aren't here to redesign the building code or architectural practices.  We need to put out the fire and save production data. We need to get those plates spinning again. In database management, this might be rolling back changes, rebooting servers, or restoring data.  It's fixing the problem and making the data safe. We get systems back up and running. We need data to confirm we've done that. Maybe we put in place some tactical changes to mitigate more 3 AM calls.  But we have to get up and do more plate spinning in another hour.
  • Strategic responses: We can't be firefighting all day, every day.  Keeping those plates spinning means having time to make strategic responses: changing how, when, where, why, and by whom things are done.  Making improvements and keeping things going. This is where we really start mashing up the trends across the data collections.  What is causing us pain, and therefore user pain? What is costing the company money? What is costing your manager money?

 

Questions for You

 

What other data time zone perspectives are there? Is there an International Date Line for data timezones?  What about a Daylight Saving Time scheme for these time zones? Do these time zones vary by job title?

 

Next week I'll talk about how these data collections and data timezones impact how we use the data and how we consume it. In other words, how we take raw data and make it powerful.

This has been a great experience to share information with all of the great Thwack community members and I've gained good insight into how folks are doing monitoring, notification and dealing with some of the day-to-day operational issues around server management.

 

What I am curious about is how much of your process for monitoring and management is wrapped into your orchestration platform or your server build process. Even more so, I would love to hear about how others have integrated operational processes like setting up backups, monitoring, patch management and other core management systems into the build process.

 

I'd love to hear about any or all of these:

 

  • What is your server build process? (Orchestrated, Image based, Manual)?
  • If you orchestrate, what platform(s) do you use? (Puppet, Chef, vCO, vCAC, SCCM/SCO, or others)?
  • Which steps have you been able to successfully automate (e.g. add to monitoring system, add to patch management)?
  • Do you have a decommission process that includes automation?

 

In my current organization we have a mixed bag of platforms, and I do as much as I can to leverage my SolarWinds monitoring, SCCM, vCO and Active Directory GPO to create clean, stateful build processes. It is a constant challenge to get folks to understand the importance of it, and I hope to see how you've had successes and challenges doing the same.

 

Thanks!

Since the inception of PCI DSS, organizations have put a number of protective mechanisms into place. As retailers, card processors, and other PCI DSS-covered entities have evolved their security mechanisms, so has the hacking community. Credit card information can sell for a considerable sum in online black markets, and it has become well worth it for data thieves to fund more sophisticated strategies to steal it. But the question remains: do data security strategies continue to evolve?

 

The recent Verizon 2014 PCI Compliance Report, which examines how well global organizations complied with the PCI security standards in 2013, discusses key gaps in PCI DSS compliance, specifically around attack visibility and recognition.

 

PCI-DSS Requirement 10: Track & Monitor All Access to Network Resources & Cardholder Data

This requirement covers the creation and protection of information that can be used for tracking and monitoring of access to all systems that store, process, or transmit cardholder data, including databases, network switches, firewalls, and clients. The data from the Verizon report shows that

PCI Image.png
  • Only 9.4% of the organizations that Verizon's RISK team investigated after a reported data breach were compliant with Requirement 10.
  • Only 31.7% of the audited organizations were Requirement 10 compliant.

 

These statistics demonstrate the strong relationship between a lack of security visibility and monitoring and the likelihood of suffering a security breach. To add some statistics from Verizon's 2013 Data Breach Investigations Report: 66% of the breaches that happened in 2012 took months or even years to discover.

 

So, Why Aren't More Companies Compliant with Requirement 10?

Early implementations of security information and event management (SIEM) systems, built to enhance information security through effective log management, were sophisticated, difficult-to-manage systems that were pushed into tightly resourced environments to comply with Requirement 10.  Due to the overall complexity of enterprise-focused SIEM solutions, many organizations found they simply didn’t have the money, time, or expertise to meaningfully leverage the technology.

 

Today, enterprise SIEM remains complex, but in the past 10 years, simpler, easier-to-use, and more affordable products have come to market. With the prevalence of easy-to-use SIEM solutions purpose-built to run on their own and provide security intelligence, organizations can now cost-effectively strengthen their security and compliance strategies.

 

Log Management Benefits Reach Beyond the Scope of Requirement 10

Enterprises that implement log management to record network and system activity for PCI compliance will find that they become more aware of suspicious behavior patterns and policy violations across their entire IT infrastructure. This helps them detect breaches and react to them more quickly. Beyond the scope of PCI Requirement 10, log management helps achieve overall data protection and information security.
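As a small illustration of the kind of centralized logging Requirement 10 pushes you toward, here is a minimal Python sketch that ships application audit events to a central syslog/SIEM collector using the standard library's SysLogHandler. The collector hostname, application name, and event text are placeholders.

```python
# Minimal sketch: ship application audit events to a central syslog/SIEM
# collector so access to cardholder-data systems is logged in one place.
# 'siem.example.com' is a placeholder; point it at your real collector.
import logging
import logging.handlers

audit_log = logging.getLogger("cardholder.audit")
audit_log.setLevel(logging.INFO)

handler = logging.handlers.SysLogHandler(address=("siem.example.com", 514))
handler.setFormatter(logging.Formatter(
    "app=payments user=%(user)s action=%(message)s"
))
audit_log.addHandler(handler)

def record_access(user, action):
    """Emit one audit event per access to the cardholder database."""
    audit_log.info(action, extra={"user": user})

if __name__ == "__main__":
    record_access("jsmith", "SELECT on cardholder_db.transactions")
```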

 

The scope of recent attacks on Target, Neiman Marcus, and Michaels, and the slow time to identify and resolve them, not only demonstrates the importance of improving visibility, but also the constant arms race security teams and hackers are fighting. A strategic security monitoring program, coupled with proper compliance reporting and backed by a SIEM suited to the size and capabilities of your security team, will enhance your overall IT security strategy and satisfy cardholder data protection norms.

 


It's Valentine's Day!

 

Your excitement is palpable.

 

How about something you can really get excited about: lovable bundles of network management joy from your totally friend-zone pals at SolarWinds?

 

SolarWinds IT Bundles - Happy Together!

 

They say "it takes two, baby, to make a dream come true," so, in that spirit, let's take a look at some choice pairings we've put together for you to bring you and yours happiness and joy in the server room:


Network Bandwidth Analyzer Pack - Network Performance Monitor (NPM) and NetFlow Traffic Analyzer (NTA)

Comprehensive Network Bandwidth Analysis & Performance Monitoring

  • Detect, diagnose, and resolve network performance issues
  • Track response time, availability, and uptime of routers, switches, and other SNMP-enabled devices
  • Monitor and analyze network bandwidth performance and traffic patterns
  • Identify bandwidth hogs and see which applications are using the most bandwidth
  • Graphically display performance metrics in real time via dynamic interactive maps

For more information, including a demo and a free trial download, see Network Bandwidth Analyzer Pack at solarwinds.com.

 

IP Control Bundle - IP Address Manager (IPAM) and User Device Tracker (UDT)

Integrated IP Space Management & Endpoint Tracking

  • Enables increased control over IP address proliferation and mobile network devices
  • Provides integrated IP management, switch port monitoring, and endpoint tracking
  • Delivers centralized visibility into IP address usage and connected users and devices
  • Performs automated IP space scanning and network device discovery
  • Simplifies IP infrastructure troubleshooting and enhances network access security

For more information, including a free trial download, see IP Control Bundle at solarwinds.com.

 

Network Guardian Bundle - Firewall Security Manager (FSM) and Network Configuration Manager (NCM)

Automated Firewall Change Management & Policy Compliance

  • Detects firewall changes & reports on policy violations
  • Automatically backs up firewall configurations on a scheduled basis
  • Enables firewall change modeling, verification & deployment
  • Provides firewall configuration comparisons with rollback ability
  • Delivers detailed network security & compliance reports

For more information, including a demo and a free trial download, see Network Guardian Bundle at solarwinds.com.

 

The Network is Safe & Sound - Go Have Some Fun!

 

With these IT bundles installed, you'll know your network is up and running, safe and sound, so you can spend some time today with the one you love. If you're still at a loss for things to do today to celebrate the occasion, you could probably do worse than "11 best places to take a techie on a date", but, then again, maybe not...good luck out there, anyway!

Let's return an earlier topic of how face recognition and wearable computing technology may converge.

 

Google Glass is in a phase of development they are calling the Explorer Program; participants are would-be early adopters who complete an application to buy Google Glass and provide feedback as the product iterates and its distribution expands. Meanwhile, among the many companies already developing apps for Glass, facialnetwork.com offers a demo version of NameTag, which allows a user of Glass to get information about people in the wearer's field of vision.

 

A press release on NameTag evokes a dating and professional networking best-of-both-worlds. Glass wearers, preparing to break the ice, can check each other out in a bar, for example, by running images of each other through NameTag's face recognition database and reviewing information that includes dating site, LinkedIn, and Facebook profiles, and public records (home appraisals, drunk-driving and other convictions, perhaps).

 

NameTag has the attention of Senator Al Franken's Subcommittee on Privacy, Technology, and the Law. In response to the subcommittee's inquiries, the company declares its intention to "authenticate every user and every device on our system to ensure data security". And using NameTag requires that a Glass wearer create a NameTag profile, supply a number of photos from specified angles, log in to the service with "at least one legitimate profile" from another social media site, and agree to exchange information.

 

Glass-less people in a wearer's view are obviously at an information disadvantage. Our only option would be to visit the NameTag website and "opt-out" of having our data accessible in NameTag searches. Otherwise, by default, we remain oblivious that a begoggled NameTag user has us in view, gazing at us through the filter of our social media information, visual and textual, before making any move in our direction.

 

Yet NameTag would seem to be unambiguously excluded from the Google Glass platform: "Developers can develop these types of apps but they will not get distributed on the Glass platform period," says a representative from the team.

 

As with any consumer technology, however, first means more to come, and usually sooner rather than later; it's just a matter of time before another brand of glass arrives with fewer restrictions, to better suit all types of voyeurs.

 

Edge Devices

 

What happens when a Google Glass unit enters your network space? Are you going to let it obtain an IP address? If the wearer is an employee, does your single-sign-on system admit the glass device as an authenticated endpoint through which its user can do all of his or her usual kinds of company business?

 

Monitoring the inevitable wave of glass will be a high priority as the security holes in each new innovation reveal themselves during real-world use.

Imagine that you are the systems administrator of your company and you are responsible for monitoring storage performance and capacity using SolarWinds Storage Manager. Suddenly there is a power outage at the company. Once power is restored, you find out that your database is either destroyed or corrupted. One way to avoid this issue is to perform a backup of your data. There are a couple of quick ways to back up the Storage Manager database. Both are explained below.

 

Performing a manual backup of the database:

 

1. Shut down the mariadb service. This also shuts down all the other Storage Manager related services

 

2. Locate your database

 

  • Windows default location

 

  • Linux default location

 

3. Make a copy of the storage folder and place it in a safe location

 

4. Start the mariadb service and any other Storage Manager related services. The Storage Manager services are:

 

  • SolarWinds Storage Manager Collector
  • SolarWinds Storage Manager Event Receiver
  • SolarWinds Storage Manager Maintenance
  • SolarWinds Storage Manager Poller
  • SolarWinds Storage Manager Web Services

 

Running the DButil script:

 

Storage Manager ships with a script that makes backups of your database, with the added benefit of running maintenance routines to help keep the database running smoothly. The script can be scheduled to run regularly using Task Scheduler on Microsoft Windows or crontab on Linux (an example schedule follows the usage section below). Make sure there is ample space in the destination folder you want to back the Storage Manager database up to.

 

Dbutil runs the following procedures:

 

  • Backup – Copies the database to a backup directory specified by the user
  • Maintenance – Runs database repair, analysis, and optimization procedures

 

Preparation to run dbutil in Windows:

  1. The default location of dbutil in Windows is %Program Files%\SolarWinds\Storage Manager Server\bin
  2. Using a text editor, edit your dbutil.bat file and find the following string: “Set BackupStage=C:\temp\mariadb_backup”
  3. Change the path to the directory where you would like to back up your Storage Manager database.

Note: It is recommended that you do not use a directory within the Storage Manager install directory.

Preparation to run dbutil in Linux:

  1. The default location of dbutil in Linux is /opt/Storage_Manager_Server/bin
  2. Using a text editor, edit your dbutil.sh file and find the following string: export backupStage=/opt/mariadb_backup
  3. Change the path /opt/mariadb_backup to the directory where you would like to back up your Storage Manager database.

Note: It is recommended that you do not use a directory within the Storage Manager install directory.

Usage of dbutil:

  • Windows - dbutil.bat [backup|maintenance]
  • Linux -  dbutil.sh [backup|maintenance]
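As an example of the scheduling mentioned above, the entries below run a weekly backup at 2:00 AM on Sundays using the default dbutil paths from the previous sections. The schedule, task name, and frequency are only examples; adjust them to your own maintenance windows and confirm the paths match your installation.

```
# Linux: weekly backup at 2:00 AM every Sunday via crontab
0 2 * * 0 /opt/Storage_Manager_Server/bin/dbutil.sh backup

# Windows: equivalent weekly task created with Task Scheduler (one command)
schtasks /create /tn "StorageManagerDbBackup" /sc weekly /d SUN /st 02:00 ^
  /tr "\"C:\Program Files\SolarWinds\Storage Manager Server\bin\dbutil.bat\" backup"
```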

More information on running dbutil can be found here.

Most IT trouble tickets stem from genuine infrastructure problems; they certainly aren't beamed in from Mars or Venus. But that covers only most of the tickets, not all of them. Help desk staff deal with all sorts of system issues and IT support requests covering system hardware, software, IT policies, domain/Internet/system access, and so on. While all of these are relevant IT issues for the help desk team to attend to, a lot of trouble tickets are simply end-user misconceptions and paranoia. With the broad spectrum of non-IT end-users across organizational divisions and departments, an increasing number of pointless and irrelevant IT tickets are being raised. End-users also create service requests for simple, commonplace tasks that they can resolve on their own. They just need to realize that these types of issues don't merit the assistance of the help desk support staff, and that some are not IT issues at all.

 

For example:

Weird non-IT complaints:

  • Microwave oven is broken. >> Not an IT issue.
  • Office refrigerator is not running. >> Not an IT issue.
  • Elevator is broken. >> Not an IT issue.
  • Lights in the hallway are flickering. >> Not an IT issue.
  • Desk cabinet jammed. >> Not an IT issue.

Complaints which are not really issues:

  • Email is down. >> No. Flip on the Wi-Fi on your laptop.
  • Server is down. >> No. Flip on the Wi-Fi on your laptop.
  • Our website is down. >> No. Flip on the Wi-Fi on your laptop.
  • The Internet is down. >> No. Flip on the Wi-Fi on your laptop.
  • Wi-Fi is down. >> No. Flip on the Wi-Fi on your laptop.

 

So, What Can Help Desk Staff and IT Admins Do?

You can set up a good knowledge base in your help desk tool. Design your end-user-facing service request form so that it displays helpful information to end-users when they are about to create irrelevant or pointless IT tickets. You can also provide self-resolution options for simple, commonplace fixes that end-users can handle themselves, such as password recovery, turning on Wi-Fi, or restarting a system or service. Add FAQs and self-resolution tips to your knowledge base and leverage your help desk solution to simplify your IT administration and support efforts!

 

If all else fails, and end-users continue creating unrelated and absurd tickets, just ask them to call "Your Friendly Neighborhood Spider-Man!"

IT Spider-Man.png

datachick

Question Everything...

Posted by datachick Feb 10, 2014

QuestionsPostItSmallest.jpg

In my last post, Data is Power, I talked about the need to bring data to decision makers to support your requests for more resources. We had a good discussion there about whether or not data is enough and what sorts of experiences people have had getting management to act.

 

I said that QUESTIONS lead to DATA which lead to ANSWERS.

 

Question Everything

 

This week I want to continue the discussion about what sorts of questions we should be asking ourselves and our servers.  I think great database administrators should have the minds of 3-year-olds.  Well, not exactly, but in the sense of looking around and questioning everything. What is that? When? How? Why is that?

 

The key to a collaborative environment, though, is to question without judging.  Our attitudes should be about getting to why and therefore answers, not "whom to blame".

 

Here are some, off the top of my head:

 

General inventory

  • How many servers is your team supporting?
  • What is the ratio of databases / instances to DBA?
  • How many instances?
  • How many databases?
  • How many tables?
  • How much data?
  • How often are backups taken?
  • How often are restores tested?
  • …I'm not going to list every database object, but you get the picture.

Performance

  • What queries run the slowest?  Why is that?
  • What queries run the most often?  The least? Why is that?
  • What databases lead to the greatest pain points? Why is that?
  • What queries have been changed the most over time? Why is that?
  • What hardware configurations have had the best performance? Why is that?
  • What query authors struggle the most with writing queries that "just work"? Why is that?
  • What applications have the worst performance? Why is that?
  • What applications and databases lead to the most on-call incidents? Why is that?
  • What applications and databases lead to the most after hours on-call incidents?

Data Integrity and Quality

  • What database design patterns have worked the best? Why is that?
  • What data has cost the organization the most? Why is that?
  • What data has the most pain points? Why is that?

 

As you can see, there are lots of questions and not a lot of answers here.  We'll get to that next week.  But for now, you can sort of see a pattern in the questions, right? Why is that?


What questions do you think we should be asking in managing databases?  Don't worry about whether the answers are hard to get -- asking the question is sometimes valuable enough.

 

The next post in this series is about data time zones: Answers in Data: So Much Data...So Little Time...Data Time Zones.

A while back I wrote this article on SOAP. Why? Well, it was to prime you for the new SOAP monitor SAM has now been outfitted with. (See, if you read my articles it's quite possible I may slip in a new feature without actually announcing it, or admitting it.)

 

Yeah, there's been a lot of talk about the new AppInsight for Exchange application. That's all well and good, but the new SOAP monitor has a little flash of its own. Take a peek:

soap.png

From the SAM Admin Guide (sorta):

Loading WSDL Files: The SOAP monitor within SAM currently supports the WSDL schema, which must be exposed on a URL. Once the WSDL file is successfully loaded, the file will be parsed automagically and the fields will populate. Once the WSDL file has been successfully loaded, you can specify values for the available arguments. There are two types of arguments: simple and complex...

 

In other words, load a file and SAM will parse it for you, or you can enter your own XML if you're comfortable doing that. (Why do all these tech manuals sound so dull and dry? Oh, wait.)
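If you're curious what a SOAP check boils down to, here is a rough, generic sketch (not SAM's actual implementation) of posting a SOAP envelope to a service and looking at the response. The endpoint URL, operation name, and XML namespace are invented for the example.

```python
# Rough illustration of a SOAP request: post an XML envelope to a service
# endpoint and inspect the response. Endpoint, operation, and namespace are
# made up; a monitor would alert on bad status codes or unexpected content.
import requests

ENDPOINT = "http://soap.example.com/WeatherService"        # hypothetical
SOAP_ACTION = "http://example.com/weather/GetCityWeather"  # hypothetical

envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetCityWeather xmlns="http://example.com/weather">
      <City>Austin</City>
    </GetCityWeather>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": SOAP_ACTION,
    },
    timeout=10,
)

print(response.status_code)
print(response.text[:200])
```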

Over the last couple of companies I've worked in, I have used a variety of different monitoring and management solutions. One of the top features that I look for in my system is the ability to effectively manage nodes during planned outages.

 

I'm dabbling with the SolarWinds Orion SDK to unmanage nodes during planned maintenance windows, and I'm slowly gaining ground with it. Ultimately I'd like to have my whole system manageable through RESTful URLs, or SMTP, so that I can use my patch management tool to actively disable monitoring before affecting systems.
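For anyone else dabbling with the same idea, here is a minimal sketch following the pattern in the Orion SDK's published Python samples: look up a node by IP, then suppress polling for a maintenance window. The server name, credentials, IP address, and two-hour window are placeholders, and the exact verb arguments may vary by Orion version.

```python
# Minimal sketch: unmanage an Orion node for a maintenance window using the
# Orion SDK Python client (pip install orionsdk). Server, credentials, and
# the node IP are placeholders; adapt to your environment.
from datetime import datetime, timedelta

from orionsdk import SwisClient

swis = SwisClient("orion.example.com", "admin", "password")

# Look up the node we are about to patch.
results = swis.query(
    "SELECT NodeID, Caption FROM Orion.Nodes WHERE IPAddress = @ip",
    ip="10.0.0.42",
)

if results["results"]:
    node = results["results"][0]
    net_object = "N:{}".format(node["NodeID"])
    start = datetime.utcnow()
    end = start + timedelta(hours=2)   # two-hour maintenance window
    # Suppress polling and alerting for the window, then resume automatically.
    swis.invoke("Orion.Nodes", "Unmanage", net_object, start, end, False)
    print("Unmanaged {} until {}".format(node["Caption"], end))
```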

 

What tools have you been able to use for this? I'd love to hear how anyone has used tools like Microsoft SCCM, LANDesk, or other patch deployment solutions to interact with a monitoring solution like SolarWinds Orion.

 

  • Do you actively disable monitoring during patch cycles?
  • Do you have a simple method to unmanage monitored devices such as email or a RESTful API?
  • What are the top features that you would add to your application/node monitoring tools?

Today I'm using SCOM 2007 R2 and migrating to SCOM 2012, with SolarWinds Orion, plus SCCM 2007 R2 for patch management. Has anyone gotten similar infrastructure to be more self-aware through better processes?

 

Hope to hear your solutions; there are a lot of folks looking for good approaches to this.

Adobe recently posted an announcement that urged all users to drop whatever they’re doing and update their Flash® Players immediately. This especially applies to those who use Google® Chrome™ and Internet Explorer®. Adobe punctuated the warning by mentioning a vulnerability that could allow an attacker to remotely take over users’ computers. The announcement included a patch that updates computers to the latest version of Flash and keeps users from essentially handing their computers over to cyber criminals.

 

Adobe’s agility in notifying Flash users everywhere about the vulnerability likely prevented many from laying out the Welcome mat for attackers. Still, it’s important to note that this is not a one-off scenario. These vulnerabilities occur over and over—often in rapid succession, leaving users and system administrators scrambling for the latest upgrade or some sort of protection from lurking attackers.

 

Unless your IT department is fully equipped with a patch management tool, this all-too-frequent vulnerability and subsequent upgrade is not a quick fix. Envision a poor system administrator walking the halls of a multi-story business building with over 1000 computers laboriously updating Flash one workstation at a time. With that visual in mind, now imagine how many more patches the administrator has needed to install to ward off the hundreds of other vulnerabilities that occur on a regular basis.

 

The best way to stay on top of the multitude of security risks, patches, and updates, is with a tool that does all this for you—automatically. SolarWinds Patch Manager is an affordable, easy-to-use patch management tool that handles all of your 3rd-party patches on thousands of servers and workstations. It also gives you an at-a-glance view of all your patch statuses. You can see the latest available patches, top 10 missing patches in your environment, and a general-health overview of your environment based on which patches have been applied—all without leaving your workstation.

 

Security risks and IT infrastructure vulnerabilities are not going away—neither are the attackers that exploit these vulnerabilities. You need to summon reinforcements to ensure that all your servers and workstations are up-to-date and security-risk free. Patch Manager is a good ally in your efforts to avoid becoming a target for attackers.

Is there anything 3D printers can’t do? Last year, we found out how 3D printers may be used to print food for space travelers. Now there’s technology that enables 3D printers to print human organs.


The California company Organovo has printed the world's first human livers. According to the inhabitat.com article, Organovo 3-D Prints the World's First Tiny Human Livers, the livers are tiny: half a millimeter by four millimeters wide. Using its proprietary bio-printing technology, Organovo prints 3-D liver tissue that contains two types of liver cells. If blood vessel lining cells are added to the ink, the printed tissue contains a network of blood vessels as well. The printed liver tissue has been able to live in a petri dish for 40 days.

 

The company is planning to create a real human liver by the end of 2014. The liver won’t be viable for transplants in people. But it could be an extremely effective tool for scientific research and drug testing. 3D-printed organs could provide a much less expensive and more humane alternative to current research and testing conducted on animals.  For more information on this technology, see http://www.organovo.com.

 

The inhabitat.com article, Chinese Scientists Successfully Produce a Living Kidney Using a 3D Printer, describes how Chinese scientists at Huazhong University of Science and Technology in the eastern Zhejiang province are using 3-D printing to create tiny kidneys. The printer “ink” is made of hydrogel cells. The printer uses the hydrogel ink to print kidneys.

 

Ninety percent of the printed cells are alive. Although these organs can carry out the same functions as real human kidneys, they have no blood vessels or nerves, and consist of cells that can only live for up to four months. The printed kidneys won’t provide a permanent solution for patients, but in 10 to 15 years, when they are expected to be available for transplants, they could buy patients time until a human kidney becomes available.

To quickly diagnose and resolve performance bottlenecks in Exchange Server, you should know every component within the application and how each impacts performance. Continuously monitoring Exchange will familiarize you with each metric. In turn, you will have a better sense of the thresholds you can set to optimize alerts. Exchange performance bottlenecks can occur for various reasons, such as heavy server load, a high rate of incoming mail, MAPI operations, POP3 requests, and so on.

 

You can start by looking at RPC load and determining whether RPC operations per second are high. You should then drill down further and see if the RPC load is coming from a single user or from a string of users. Mailbox size is an area you should constantly monitor because it can affect server performance. Whenever folder size or mailbox size increases, CPU and disk usage increase, which drags down server performance. Typically, a heavy load on the server or applications leads to bottlenecks in one of the following areas:

  • Database capacity: Exchange bottlenecks can occur with email database capacity. Even if you set a database size limit, the database can grow faster than you expected. Regularly check the email database, because if it reaches capacity, all the mailboxes in that database will have issues.
  • Disk: A disk bottleneck occurs when there are high read and write latencies on the database drives and storage logs. The best way to diagnose this is to look at the list of applications using the same disk. Users are constantly performing a variety of operations in Exchange, which consumes the allocated disk resources; when other applications hog those resources, a bottleneck results. To get ahead of this, monitor the I/O database read/write latency and I/O log read/write latency counters.
  • CPU: CPU performance can suffer when heavy, critical applications run side by side. If CPU utilization is consistently over 80% or 90%, there's almost certainly a bottleneck. Look at what processes are running and how much CPU each is using. One fix is simply adding CPU capacity, or consider moving Exchange to a dedicated server with adequate resources. In addition, monitor the % processor time, % user time, and processor queue length counters.
  • Memory: Sufficient memory must be available to support many simultaneous user logons. With insufficient memory and many logons, a bottleneck will occur. Reducing the number of client applications per user and turning off unnecessary third-party plugins will free up memory. Monitoring total memory, memory cache, and memory usage % is helpful.
  • Configuration: Setting up Exchange Server with the right configuration is key. For example, ensure that database files, temp files, transaction logs, etc. are not shared with other resources; these are critical to Exchange performance and availability. Another tip: don't schedule maintenance during business hours. When any of these is misconfigured, end-users are affected.
  • Hardware: Monitor memory, storage disks, and CPU regularly and upgrade them if needed. A hardware failure affects the Exchange server and the applications running on it, and other applications depending on a shared resource will also be impacted when a hardware resource has contention issues.


Checking these performance counters for bottlenecks is recommended because it lets you evaluate performance at all times. Your monitoring software will provide a visual representation of application performance in the form of graphs, charts, or tables for specific time intervals, and alerts will tell you when there's a performance issue. Once notified, you can immediately start troubleshooting instead of waiting for a user to raise a help desk ticket.
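As a simple stand-in for the idea of threshold-based alerting (using OS-level counters rather than Exchange-specific ones), here is a minimal Python sketch built on the psutil library. The threshold values are illustrative; in practice you would tune them against your own baseline and read the Exchange counters listed above.

```python
# Minimal sketch of threshold-based checks using cross-platform OS counters
# from psutil (pip install psutil). Thresholds are illustrative only; a real
# Exchange deployment would also read the Exchange-specific counters.
import psutil

THRESHOLDS = {
    "cpu_percent": 85.0,      # sustained CPU above this suggests a bottleneck
    "memory_percent": 90.0,   # little free memory left for new logons
    "disk_percent": 90.0,     # database/log volume nearly full
}

def check():
    readings = {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }
    for name, value in readings.items():
        if value >= THRESHOLDS[name]:
            print(f"ALERT: {name} at {value:.1f}% (threshold {THRESHOLDS[name]}%)")
        else:
            print(f"OK: {name} at {value:.1f}%")

if __name__ == "__main__":
    check()
```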

Read more Mordac cartoons at: http://dilbert.com/strips/comic/2007-12-13/


   #1 Device Hardware – Fix it and Forget it

    • Don’t record or track device End-of-Life details – You don’t want to miss the surprises!
    • Let those under-performing, non-compliant devices continue to operate in the network.
    • No informed decisions or planning for budgets & device replacements –Instincts are right. Most of the time!

Feel the thrill when your device fails and you realize that it’s out of vendor support!


   #2 Human Errors Are Normal – Deal with it!

    • Who said human errors are the No. 1 cause of network downtime?
    • Don’t spend money on automating configuration & change management tasks. What is left for the administrator to do then?
    • So, what if the network is down? It is not much of an expense to the organization.

     Manual configuration of a couple of hundred routers or switches is a FUN task!

 

   #3 Configuration change approvals? Gaaah…Don’t be too bothered….

    • Anybody should be able to make configuration changes. What if they go wrong? How do you expect people to learn??
    • No information on who made what change, and when? Understand that troubleshooting issues takes time!

   You’ll know anyway when the network goes down…


   #4 Regular config. backups? Too much effort!

    • Manual backup is a huge, time-consuming task. And, it’s boring!
    • Check for proper and regular back up of configurations every day? Naah…. Just write one if it’s that urgent!
    • It is not that big a deal to make bulk configuration changes, just ask users for more downtime!

    Don’t you have better things to do?


    #5 Compliance & regulatory standards? – You’re not the one to follow rules!

    • Don’t track compliance with federal regulations & standards. Maintaining documentation for audits is not that important. Do it if you have the time….
    • Don’t find those policy violations and fix them. Worry about them when there is a security breach.

    There isn’t anything worth stealing here…..is there?


IT failures have become an accepted, virtually expected, aspect of enterprise life. But does that mean you can afford to overlook the essential network configuration and change management tasks that have the biggest impact on your network’s uptime and your organization’s expenses?

einsigestern

Herding Cats

Posted by einsigestern Feb 5, 2014

Do you manage firewalls and other network devices? Does it sometimes seem that herding cats would be easier? Well, put away the kibble and fret not. SolarWinds has the solution that will make you say, "This is so easy, I can't believe I'm getting paid to do it." So, before you open another Mountain Dew, check out Firewall Security Manager.

Firewall Security Manager (FSM) is an affordable firewall management product with features that address key issues in managing and auditing firewalls. FSM is integrated with SolarWinds Network Configuration Manager (NCM). This means you can import NCM-managed firewalls into FSM. Here is a sample of FSM functionality:

  • Automated Security Audits - 120-plus customizable checks based on standards from the NSA, NIST, SANS and others.
  • Firewall Configuration and Log Analysis - Isolate redundant, covered, and unused rules and objects.
  • Modeling - Report what effect a new rule, or change to an existing rule, will have on your firewall policy, without modifying your production devices.
  • Change Management - Simplified firewall troubleshooting for your multi-vendor, Layer 3 network devices.

Browse Your Rules and Objects

The FSM Firewall Browser enables you to view and explore security rules, NAT rules, network and service objects, and network interfaces in an easy-to-navigate user interface. You can search for specific rules, objects, and configurations. This makes identifying locations in rule sets that require changes easier than finding that sock your dog stole from the laundry hamper. You can even query firewall behavior to determine traffic flows, and hosts that are exposed to potentially dangerous or risky services.

Redundant=Bad Simplify=Good

FSM enables you to compare different versions of a firewall configuration to determine the disparities. You can compare ACL and NAT rules, network and service objects, and see how the traffic flow differs. Then compare the traffic flows to determine the rule changes responsible for the differences in policy. You can also simplify firewall rule sets and object definitions, identify redundant, covered rules, and analyze log data to determine which rules and objects are not used. Based on the analysis, you can generate scripts to clean up firewall configurations. The Security Audit Report uses security checks based on standard templates or your own customized templates to compare different versions of a firewall configuration to determine how changes to rules or objects affected security.

Is That All There Is?

Fortunately, no. Otherwise I'd have nothing else to write about. But, there is, and I do. So make your mom proud that you chose tech rather than that liberal arts degree, and check back in a week or so for the second installment "Adventures in Network Management."

superbowl-wifi.png

The Super Bowl WiFi password gaffe certainly provoked the typical round of “OMG Epic Fail!” posts on tech news sites.  Alex Jones even found a way to associate it with a conspiracy. But they're overreacting in typical fashion, and 99% of IT would defend their NFL peers. It’s not an IT fail.  Rather, it’s a reminder that network engineers should always be thinking about the weakest link for security: the "wet network", aka human beings.  Bruce Willis gave a sage answer in this context: “Sir, as an employee of this company and user of this network are you remaining security aware at all times?” “No, I am a meat popsicle.”

 

Should the TV crew in the broadcast van be expected to be thinking about network security?  Of course not, they should be thinking about getting great shots.  Good netadmins focus more on ensuring flexibility to respond to situations like these.  Resetting an SSID takes a few seconds and the issue is resolved.  Great engineers take it a step further, implementing layers of security because they know issues like this can and will occur.  They partition networks, observe traffic and audit APs.

 

So despite the breathless headlines about the NFL exposing its global network credentials to a billion people, it’s probably no big deal.  In most settings like this, marco/w3Lc0m3!HERE would be the guest SSID for tweeting and surfing the web rather than the keys to the security kingdom. Although it makes great headlines, this is probably a non-event, kinda like Super Bowl XLVIII.

According to a recent Pew poll, American users are more afraid of "cyber attacks" than of world-impacting threats like nuclear weapons. Granted, you are more likely to get your credit card number stolen than someone is to push the big, red nuclear button of doom, but that's like being more afraid of being pick-pocketed than of being beaten to a pulp and robbed when you go to a big city. Getting your wallet or credit card number stolen can be a big deal, especially the first time, but it's more of an annoyance-level threat than a loss-of-life-or-limb threat.

 

 

What is a "cyber attack" (according to popular opinion)?

 

 

Unshockingly enough, Hollywood and mass media have a lot more to do with this fear of hackers than reality does. People seem to think that rogue (or government sponsored) hackers can ruin their lives (sort of true), bring down power grids (not likely), and start WWIII by hacking missile launch or guidance systems (thanks, Hollywood).

 

The first thing most people think of when they hear they've been hacked is usually something to do with their bank accounts or credit cards. This is common enough that financial institutions have a set of guidelines already in place to deal with unusual credit card or account activity. If you're afraid someone is going to drain your accounts, that's more difficult than you think. Since banks don't like to lose customers or money, they usually put some kind of hold on large amounts of money being transferred around. There are also federal regulations on the movement of large amounts of money.

 

Financial ruin? Unlikely.

 

Now there are other attacks that are more likely to harm individuals, though generally not physically. Facebook accounts and other social media accounts can be hacked and used to ruin people's reputation. Hackers can post personal information of others and release the full might of Internet trolls and bullies (which generally include death and other unsavory threats).  They can also upload private pictures or doctored pictures to sites and ruin someone's reputation enough that they'll be unable to get a job in their chosen field. These kinds of attacks are less reported in the media and significantly more difficult to recover from.

 

Life ruining? Possibly. Do people think of this when "cyber attack" comes up? Probably not.

 

Getting into the more dramatic, high-profile, high-damage ideas of hackers, losing infrastructure or missiles to cyber attacks is not particularly likely. Squirrels, tree limbs, and Mother Nature are more likely to cause a blackout than hackers. As far as death and destruction from missiles or nuclear weapons go, well, I haven't heard of any confirmed (or unconfirmed) death from a cyber attack. Cyber attacks can certainly cause damage (Stuxnet, for example), but they don't cause the massive loss of life or property damage that most people seem to fear.

 

Death via hacker? Nope. Get me my self-driving car, and then we'll talk.

 

 

Why are we afraid?

 

 

There's a lot of hype around "cyber" threats that stem from popular media and ignorance. Computers have become widespread enough that everyone can relate to "cyber" dangers in movies or television, but only people in our industry seem to realize how much these fictionalized attacks are either widely exaggerated or just wrong based on current technology. News stories also get a lot wrong when reporting security breaches. It doesn't help that there are companies designed to take advantage of these fears and spread computer security misinformation to drum up more business. Few people are taught basic information security, so they make poor security choices and unreasonably fear attacks.

 

For a real-world example of how non-IT folks interpret the news based on how they think cyber attacks work, my mother is very concerned about me purchasing things online due to the Target credit card breach. I can't seem to convince her that the Target breach has nothing to do with how I shop online and that she really doesn't need to worry about that. She sees "credit cards hacked" and then associates that with online shopping when it has nothing (or at least very little) to do with online shopping, especially online shopping at non-Target stores.

 

 

Simple preventative measures

 

 

If only we could get everyone to attend a short information security class... Really, the easiest and most effective way of preventing the feared "cyber attack" is basic information security. Using strong passwords or pass phrases, cycling your passwords, not keeping lists of your passwords, and not clicking on strange links are the steps most likely to prevent security breaches.
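On the pass phrase point, here is a minimal sketch of generating one with Python's standard secrets module; the tiny word list is a stand-in, and in practice you would use a much larger list such as a diceware list.

```python
# Minimal sketch: generate a strong passphrase with the standard library's
# secrets module. The word list is a tiny stand-in; use a large list (for
# example, a diceware list) in practice.
import secrets

WORDS = [
    "copper", "lantern", "orbit", "velvet", "cactus", "harbor",
    "meadow", "pixel", "quartz", "saddle", "timber", "walnut",
]

def passphrase(word_count=5, separator="-"):
    """Pick words at random using a cryptographically secure generator."""
    return separator.join(secrets.choice(WORDS) for _ in range(word_count))

if __name__ == "__main__":
    print(passphrase())   # e.g. "quartz-harbor-pixel-copper-timber"
```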

datachick

Data is Power

Posted by datachick Feb 3, 2014

QuestionsDataAnswersSmall.jpg

You have the power right at your fingertips.

 

With access to the right data your fingers can make positive changes for your organization. Chances are high that if you manage databases, servers, and other IT services you have access to the data you need to make things better for you, your company, and your customers.

 

I talk with many people who believe they have no influence in their organization. They stare at grey and dreary cubicle walls. Endless meetings. Missed deadlines. And worst of all they have lousy, weak, corporate coffee.  They feel as if nothing they do matters.

 

But that’s not true. Every role has value. Good ideas can, and do, come from anywhere and anyone. The power to make these good ideas become real is DATA. And where does good data come from? From GOOD QUESTIONS.  Good questions require GOOD ANSWERS, and that comes from GOOD DATA.

 

Are you getting what you need?

 

When you ask for more of something from management, but management doesn't bite, do you know one reason why? Because you didn't bring the RIGHT DATA to back up your requests.

 

Requests like these:

• We need more people

• We need to “find a new home” for Steven and Brian

• We need more training / team members need more training

• We need to be involved earlier in development projects

• We need to consolidate / need more hardware / new software

• We need to virtualize

• We need to standardize more / less

• We need stronger control / more flexibility

 

What data would you want to see if you were the pointy-haired boss and got these requests? If you have tried to use data previously, did it work? If not, do you know why?  Update: Post your questions to the next post in this series.

 

All great questions, and all the more reason why data is the most important asset we have.

 

If you are feeling stuck in your cube, a gold mine is only a few mouse clicks away...data can transform your role at work or in your life.

We hear the ever-present story of Shadow IT, but what is even more challenging for many sysadmin and network teams is what I call Shadow IP: the independent tracking (or failure to track) of IP and network information for your environment.

 

There are many tools that have different levels of storing, displaying, tracking, and even actively managing IP addressing for networks. With the freshly minted SolarWinds IPAM beta, there are some slick features coming down the pipe. The real question though, is what are you using today, if anything?

 

I've used everything from Excel documents to custom-designed web apps, and I've dabbled a bit with the Microsoft IPAM tools in 2012, but so far I've had the best adoption by my team (network and IT ops) with SolarWinds IPAM.
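For anyone still in the spreadsheet stage, here is a minimal sketch of the kind of homegrown tracking those tools replace: sweep a subnet with ping and record which addresses answer. The subnet is a placeholder, and the ping flags shown are the Linux ones (-c, -W); Windows uses -n and -w instead.

```python
# Minimal sketch of homegrown IP tracking: sweep a subnet with ping and
# record which addresses respond. Subnet is a placeholder; ping flags are
# Linux-style (-c, -W) and would need adjusting on Windows (-n, -w).
import csv
import ipaddress
import subprocess
from datetime import datetime

SUBNET = "192.168.1.0/24"   # placeholder subnet

def is_alive(ip):
    """Return True if the address answers a single one-second ping."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", str(ip)],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def sweep(subnet, out_file="ip_usage.csv"):
    with open(out_file, "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["address", "in_use", "checked_at"])
        for ip in ipaddress.ip_network(subnet).hosts():
            writer.writerow([str(ip), is_alive(ip), datetime.now().isoformat()])

if __name__ == "__main__":
    sweep(SUBNET)
```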

 

I'd love to hear about your stories, good and bad on IP address management. Looking to hear about anything regarding:

 

  • Tracking IP usage
  • Documenting subnets and descriptive info
  • Active management
  • How do you do IPAM today?
  • What's missing from your IPAM solution?
  • What are the barriers to adopting a full IPAM product?

 

Looking to see if I can improve my processes. Happy to share any learnings along the way too as I dive into more options and features of any tools that may be appropriate.

 

Any thoughts and help appreciated!

It has become a seemingly inevitable phenomenon in enterprise networks: outages! As much as we do not want them to occur, they seem to strike like lightning where we least expect it and when we are unprepared, causing disruption of business services, loss of time and money, and the additional cost of repair and remediation. Even the biggest and most reputable companies across all industry sectors and geographies experience network outages from time to time, and it is the network administration and IT teams that bear the brunt of the aftermath.

 

2013 was no better than the years before. Let’s take a quick peek into the annals of last year and revisit some of the biggest and most disruptive network outages that impacted corporations. We’ll also learn why they happened and look at the damages incurred.

 

Company / Website | When the Outage Happened | Reason for Outage | Impact & Losses | Reference
Healthcare.gov | Sporadic outages between Oct and Dec 2013 | Network infrastructure failure, cloud failure, maintenance and backup errors, etc. | Users unable to use the online service | http://bit.ly/1bO19Oo
Amazon Web Services | Sep 2013 | Increased network error rates and latencies for the APIs; impacted instances were unreachable via public IP addresses | Amazon.com users in the USA and Canada could not connect to the site for 25 minutes | http://bit.ly/1eprOWn
Yahoo Mail | Dec 2013 | Hardware outage in one of the storage systems serving 1% of Yahoo email users | Affected nearly 1 million users for about a week | http://bit.ly/JhoVe7
Google (Google.com, YouTube, Google Drive, and Gmail) | Aug 2013 | Experts say it was probably a physical infrastructure problem, given the size of the outage | Outage lasted between 1 and 5 minutes, caused an estimated 40% drop in worldwide traffic, and cost Google around $545,000 in revenue | http://bit.ly/1fKiyKX
NASDAQ | Aug 2013 | A data processor failure caused by a software bug crashed the backup system and brought down network connectivity | Stock trading halted for three hours | http://bloom.bg/1aQRNXN
Time Warner Cable | Oct 2013 | Equipment failure at network facilities in the north-eastern U.S. | New York users of Time Warner Cable lost internet connectivity and Web access for up to 24 hours | http://huff.to/1dV66F2
Bank of America | Feb 2013 | Technical issues that were never fully explained | The bank's online and mobile payment systems went down | http://nyti.ms/1bgfhjA


 

In addition to the above, there were many short-term outages and interruptions in Web services, including sites like Facebook and LinkedIn.

 

Certainly these debacles happened even to large companies and their data center networks, but that doesn’t mean you have to sit back and expect an outage in your own network any time soon. Proactive network availability and performance monitoring is the only way to ensure you are alerted immediately when there’s network downtime, a hardware failure, or a performance issue on your network devices.

 

Monitor your network round the clock and stay vigilant for those unforeseen network outages. 2014 can be saved from network outage nightmares!
