
Geek Speak


In traditional IT, the SysAdmin's role has been to support the infrastructure as it stands. Our measure of success is the ability to consistently stand up architecture in support of new and growing applications, build test/dev environments, and deliver them quickly, consistently, and reliably enough that these applications work as designed. Most SysAdmins with whom I've interacted have performed their tasks in silos: the network team does its job, the server team does theirs, and of course storage has its own unique responsibility. In many cases, the silos worked at cross purposes, or at minimum with differing agendas. The differing rhythms of each group often made it impossible to deliver the infrastructure agility the customer required. Even so, in many cases our systems did work as we'd hoped, and all the gears meshed properly to ensure our organization's goals were accomplished.

 

In an agile, cloud-native, DevOps world, none of that friction can be tolerated. We need to provide infrastructure to our developer community with the same agility they deliver to their applications.

 

How are these applications different from the traditional monolithic applications we've been supporting for so long? The key is the concept of containers and microservices. I've spoken of these before, but in short, a container environment packages either the entire application stack or discrete portions of it, so that they no longer depend on the particular operating system or platform on which they sit. The x86 layer is already in place, or can be delivered generically on demand. The application, or portions of it, can be deployed as the developer creates it and removed just as simply. Because there is so much less reliance on the underlying physical or virtual infrastructure, the compute layer can be located pretty much anywhere: your prod or dev environment on premises, your hybrid cloud provider, or even public cloud infrastructure like Amazon. As long as security and segmentation have been configured properly, the functional compute layer and its location are largely irrelevant.
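
To make that concrete, here is a minimal sketch of a single-purpose service that could be packaged into a container and dropped onto any generic compute layer. It uses only the Python standard library; the port and endpoint are illustrative assumptions, not part of any particular platform.

```python
# A minimal sketch of a "discrete portion of the app stack" packaged as its own
# service, assuming Python 3 is available wherever the generic compute layer
# happens to live (on premises, hybrid, or public cloud). Port and path are
# placeholders for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # One self-contained responsibility: report service health.
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Bind to all interfaces so the surrounding platform decides where it runs.
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```

Because the service carries no platform-specific dependencies, it can be stood up or torn down on demand wherever capacity exists.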

 

A container-based environment is not entirely distinct from a microservices-based one: it delivers an application as an entity, but rather than relying on a physical or virtual platform and its unique properties, it can sit on any presented compute layer. These technologies, such as Kubernetes, Docker, Photon, Mesosphere, and the like, are maturing, with orchestration layers and delivery methods far friendlier to the administrator than they have been in the past.

 

In these cases, however, the application platform being delivered is much different from traditional large corporate apps. An old Lotus Notes implementation, for example, required many layers of infrastructure, and applications of that type simply don't lend themselves to a modern architecture. They're not "cloud native." This is not to disparage how relevant Notes became to many organizations. But the value of a modern architecture, with its mobility of applications and data, simply does not fit the kind of infrastructure that a monolithic SAP or JD Edwards-style deployment required. Of course, these particular applications are solving the cloud question in their own ways, and they remain as vital to their organizations as they've ever been.

 

In the following four blog posts, I'll address the architectural, design, and implementation issues facing the SysAdmin within the new paradigm that cloud native brings to the table. I hope to answer the questions you may have, and I look forward to as much interaction, clarification, and challenging conversation as possible.

At Tech Field Day 13, we gave everyone a sneak peek at some of the cool features we'll be launching in March. If you missed the sneak peek, check out the footage below:

 

 

Our PerfStack Product Manager has also provided a detailed blog post on what you can expect from this cool feature. In the near future, look for the other Product Managers to give you more exclusive sneak peeks right here on THWACK.

 

Join the conversation

We are curious to hear more about what you think of PerfStack. After all, we used your requests to build this feature. With that said, I’ve seen several requests in THWACK expressing what you need from an Orion dashboard. I would love to hear from those of you directly in the IT trenches, specifically some ideas on how you would manipulate all this Orion data with PerfStack.

 

Personally, I would use PerfStack to visually correlate the page load times in synthetic transactions as observed by WPM with web server performance data in WPM and network latency from NetPath, and maybe storage performance from SRM to get a better understanding of what drives web server performance and what is likely to become a bottleneck that may impact end user experience if load goes up. But we want to hear from you.

 

If you had access to PerfStack right now, what would you do with it?

What interesting problems could you solve if you could take any monitoring object from any node that you are monitoring with Orion? What problems would you troubleshoot? What interesting use cases can you imagine? What problems that you face today would it help you solve?

 

Let us know in the comments!

 

In Geek Speak(TM), THWACK(R) MVP Eric Hodeen raised the concern that networks are increasingly vulnerable through outdated network devices like routers and firewalls, and even the IoT devices now making their way into the workplace. (See Are Vulnerable Routers and IoT Devices the Achilles Heel of Your Network?) The reason, he writes, is that these devices are powered by lightweight and highly specialized operating systems that are not hardened and don't always get patched when vulnerabilities are discovered. Eric then goes on to give several recommendations to defend against the risks these devices pose. In closing his article, Eric reminds us that security defenses rest on a combination of technical and procedural controls and that there is a direct correlation between configuration management and security management. He then asserts that "perhaps one of the best security tools in your toolbox is your Network Configuration and Change Management software." We couldn't agree more.

 

It’s for this very reason that Network Configuration Manager (NCM) continues to receive so many industry security awards. In late 2016, SolarWinds(R) NCM received two noteworthy awards for security management.

In November, SC Magazine recognized NCM as a 5-star “Best Buy” for Risk and Policy Management. In recognizing NCM, SC Magazine said:

 

“Simple to deploy, very clean to use, and SolarWinds’ decades of experience with internet-working devices—especially Cisco(R) —really shows.” 

 

You can read the full article here.

 

In December, GSN recognized SolarWinds NCM as the Best Compliance/Vulnerability Assessment Solution as part of its 2016 Homeland Security Awards. The GSN Awards are designed to recognize the most innovative and important technologies and strategies from U.S. and international IT and cybersecurity companies, physical security companies, and federal, state, county, and municipal government agencies.

 

You can review all GSN award winners here.

 

Network security and compliance is a never-ending job. Bad actors devise new hacks, new software means new vulnerabilities, and network changes require new security controls. Manually doing everything needed to keep your network secure and compliant no longer works. This is where NCM can help you to (see the configuration drift sketch after the list):

 

  • Standardize configurations based on policy
  • Immediately identify configuration changes
  • Create detailed vulnerability and compliance reports
  • Remediate vulnerabilities and policy violations
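
As a rough illustration of the "identify configuration changes" item above, here is a minimal configuration drift check using only the Python standard library. The file names are hypothetical, and this is not how NCM works internally; the product automates collection, comparison, and remediation for you.

```python
# A minimal sketch of detecting configuration drift, assuming the approved
# baseline and the current running config have been saved as text files.
# The file names below are hypothetical examples.
import difflib
from pathlib import Path

def config_drift(baseline_path, running_path):
    baseline = Path(baseline_path).read_text().splitlines()
    running = Path(running_path).read_text().splitlines()
    # unified_diff yields only the changed lines, marked with +/-.
    return list(difflib.unified_diff(baseline, running,
                                     fromfile="baseline", tofile="running",
                                     lineterm=""))

drift = config_drift("core-switch-baseline.cfg", "core-switch-running.cfg")
if drift:
    print("Configuration drift detected:")
    print("\n".join(drift))
```

Anything the diff reports becomes a candidate for either updating the baseline (an approved change) or remediating the device (an unapproved one).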

 

Using NCM is like having an experienced security engineer on staff.  So don’t just do the work of network security and compliance—automate it using NCM to get more done.  

To learn more or to try NCM for 30 days, please visit the NCM product page.

 

Do you use NCM to improve your network security and compliance?  Share your story below.

Recent news headlines report alarming intrusions into otherwise strong, well-defended networks. How did these incursions happen? Did perpetrators compromise executive laptops or critical servers? No, these highly visible endpoints are too well defended. Instead, hackers targeted low-profile, low-value network components like overlooked network routers, switches, and Internet of Things (IoT) devices.  

 

Why are network and IoT devices becoming the target of choice for skilled hackers? Two reasons. First, vendors do not engineer these devices to rigorously repel intruders. Unlike servers and desktops, the network/IoT device OS is highly specialized, which ironically may make it more difficult to secure, yet vendors do not make the effort to harden these platforms. Second, these devices are out of sight and out of mind. As a result, many of them may be approaching technical obsolescence and are no longer supported.

Many of us remember how the Mirai IoT botnet recently compromised millions of internet-enabled DVRs, IP cameras, and other consumer devices to launch a massive distributed denial-of-service (DDoS) attack against major DNS providers and "brown out" vast regions of the internet. For many, this attack was simply an inconvenience. However, what if IoT devices or weakly defended routers and switches were compromised in a way that impacted our offices, warehouses, and storefronts? We can easily see how weak devices could be targeted and compromised to disrupt commercial operations. Many companies use outdated routers, switches, and weakly secured IoT devices. So how do we protect ourselves?

 

One solution is to forbid any outside electronics in the workplace, which gets my vote, though I know it is increasingly unrealistic; "where there is a will, there is a waiver" is a common response I hear. A second solution is to retire old hardware and upgrade firmware containing verified vulnerabilities. Another approach is to design, build, and implement a separate network access scheme for IoT devices so they do not interfere with corporate network productivity. Once this network is operational, it is the job of corporate technology engineers and security staff to ensure the devices are used appropriately. To complement these strategies, it helps to have mature change management processes and a network configuration and change management (NCCM) solution with EOL, vulnerability assessment, and configuration management capabilities.
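
To give a feel for the EOL side of that NCCM workflow, here is a hedged sketch that compares a device inventory against end-of-support dates and flags anything past support. The models, dates, and hostnames are made-up examples, not real advisories, and a real NCCM product tracks this automatically.

```python
# A hedged sketch of an end-of-life check: flag inventory devices whose model is
# past its (hypothetical) end-of-support date so they can be isolated, upgraded,
# or retired. All data below is illustrative.
from datetime import date

INVENTORY = [
    {"host": "edge-rtr-01", "model": "WidgetRouter 2600", "firmware": "12.4"},
    {"host": "lobby-cam-07", "model": "AcmeCam IP-100", "firmware": "1.0.3"},
]

EOL_DATES = {  # hypothetical end-of-support dates keyed by model
    "WidgetRouter 2600": date(2016, 1, 1),
    "AcmeCam IP-100": date(2020, 6, 30),
}

def past_support(inventory, eol_dates, today=None):
    today = today or date.today()
    return [d for d in inventory
            if d["model"] in eol_dates and eol_dates[d["model"]] < today]

for device in past_support(INVENTORY, EOL_DATES):
    print(f"{device['host']} ({device['model']}) is past end of support - "
          "isolate, upgrade, or retire it.")
```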

 

Fortunately, these solutions are straightforward. By using a combination of technical and procedural controls, you can alleviate much of the risk. There is a direct correlation between configuration management and security management. So in reality, one of the best security tools in your toolbox is your network configuration and change management software. Using the right tools and taking deliberate and sensible steps can go a long way to keep your company out of the headlines.

 

About the author

Eric Hodeen is a SolarWinds expert and THWACK MVP with over 20 years of experience in network engineering, with expertise in network management and operations and STIG, PCI, and NIST compliance. Eric has designed, implemented, and managed networks for the Department of Defense across the U.S. and Europe. He earned his M.S. in Management of Technology with a specialization in security from the University of Texas at San Antonio and holds numerous certifications, including Cisco CCNA R&S, CCDA, CCNA Security, CCNP Security, Juniper JNCIA/JNCIS, ITIL V2, Security+ CE, and CompTIA CASP.

By the time you read this, I will already be in Austin for Tech Field Day #13 hosted at SolarWinds. I am looking forward to attending my first ever TFD, after having virtually attended some of the previous events. I enjoy learning from the best industry experts and TFD allows for that to happen.

 

As always, here's a bunch of links I found on the Intertubz that you may find interesting. Enjoy!

 

Trump aides' use of encrypted messaging may violate records law

Leave it to our government to decide that encrypting messages somehow means they can't be recorded, despite the fact that industries such as financial services have been tracking encrypted messages for years due to SEC rules.

 

A Quarter of Firms Don’t Know if They’ve Been Breached

That number seems low. I think it's closer to 100%, because even firms that know they have been breached likely have no idea about how many breaches they have suffered.

 

Why security is best in layers

And here's the reason companies have no idea if they have been breached: they don't have the right talent in-house to discover such things. Identifying a breach takes more than just one person—it requires analysis across teams.

 

Microsoft Almost Doubled Azure Cloud Revenue Last Quarter

Articles like this remind me about the so-called "experts" out there a few years back who were dismissing the cloud as anything worthwhile. And maybe, for some workloads, it isn't. But there is one thing the cloud is right now, and that's a cash cow.

 

Monday Vision, Daily Outcomes, Friday Reflection for Remote Team Management

A must read for anyone working remotely. I've done a similar weekly report in the past, listing three things I did, three things I want to do next week, and three roadblocks.

 

Why your voice makes you cringe

Listening to me speak is awful. I don't know how anyone does it, TBH, let alone watch me.

 

Should the cloud close the front door to the database?

Why is this even a question? As a host, they MUST protect everyone, including the ill-informed customer. If the host left security up to the customer, the cloud would collapse in days.

 

Booted up in 1993, this server still runs—but not for much longer

If you replace 20% of the hardware components for a server, is it the same server as when it started? Never mind the philosophical question; this is quite a feat. And yes, I did think of Battlestar Galactica, and how the lack of upgrades proved to be the thing that saved it from destruction. I'm hoping they never turn this thing off.

 

A quick image from a flight last week—somewhere over Hoth, apparently:

[Image: hoth.jpg]

By Joe Kim, SolarWinds Chief Technology Officer

 

Before last year, I bet you never gave a second thought to Alexander Hamilton. However, a popular musical has brought the United States’ first Secretary of the Treasury to center stage.

 

Hamilton had some really great quotes. Here’s one of my favorites: “Safety from external danger is the most powerful director of national conduct.”

 

Hamilton wasn’t talking about cybersecurity, but his words are nevertheless applicable to the subject. As threats multiply and gain complexity, federal IT professionals are feeling the pressure and must take measures to protect their agencies from external danger.

 

Last year, my company, SolarWinds, issued the results of a cybersecurity report and survey that ascertained the level of concern amongst federal IT administrators about growing threats. Out of 200 government IT professionals surveyed, forty-four percent mentioned threat sophistication as the number one answer to why agencies are more vulnerable today, while twenty-six percent noted the increased volume of threats as their primary concern.

 

Hamilton would tell you to take the bull by the horns. Agency IT administrators should take a cue from old Alex and adopt ways to address their concerns and fight back against threats.

 

The fight for independence… from bad actors

 

Every successful fight begins with a strategy, and strategies typically begin with budgets. As these budgets continue to tighten, agency personnel must continue to explore the most cost-effective options.

 

Software acquisition can be more efficient and budget friendly. Agencies can download specific tools at lower costs. Further, these tools are typically designed to work in heterogeneous environments. These factors can help IT managers cut through red tape while saving money.

 

The right to bear software

 

No revolution can be won without the proper tools, however. Thankfully, the tools that IT managers have at their disposal in the fight against cyber threats are numerous and powerful.

 

The primary weapon is security information and event management (SIEM) software. Automated SIEM solutions can help managers proactively identify potential threats and react to them as they occur. Agencies can monitor and log events that take place on the network—for instance, when suspicious activity is detected from a particular IP address. Administrators can react by blocking access to a user or device, or identifying and addressing policy and compliance violations.
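
As a toy illustration of the kind of rule a SIEM automates at scale, here is a sketch that counts failed logins per source IP in a log file and flags candidates for blocking. The log format, file name, and threshold are assumptions for the example; a real SIEM correlates far more than a single log.

```python
# A minimal sketch of one SIEM-style detection: count failed logons per source
# IP and flag anything over a threshold. The log line format, file name, and
# threshold are assumptions for illustration.
import re
from collections import Counter

FAILED = re.compile(r"FAILED LOGIN .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5

def suspicious_sources(log_lines):
    counts = Counter()
    for line in log_lines:
        match = FAILED.search(line)
        if match:
            counts[match.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n >= THRESHOLD}

with open("auth.log") as log:
    for ip, count in suspicious_sources(log).items():
        print(f"{ip}: {count} failed logins - candidate for blocking")
```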

 

These solutions have been successful in helping agencies detect and manage threats. According to our survey respondents, users of SIEM software are better able to detect, within minutes, almost all threats listed on the survey. Other tools, such as configuration management software that lets managers automatically adjust and monitor changes in network configurations, have also proven effective at reducing the time it takes to respond to IT security incidents.

 

Hamilton once said “a promise must never be broken.” The promise that federal IT managers must make today is to do everything they can to protect their networks from mounting cybersecurity threats. It’s certainly not an easy task, but with the right strategies and tools, it might very well be a winnable battle.

 

Find the full article on GovLoop.

One of the hottest topics in IT today is IoT, which usually stands for the Internet of Things. Here, however, I’d like to assign it another meaning: the internet of trolls and their tolls.

 

What do the internet of trolls and their tolls have to do with the data center and IT in particular? A lot, since we IT professionals have to deal with the mess created by end-users falling for the click-bait material at its heart. Without a doubt, the IT tolls from these internet trolls can cause real IT headaches. Think security breaches and ransomware, as well as the additional strain on people, processes, and technological resources.

 

One example of the internet of trolls and their tolls is the rise of fake online news. It's an issue that places the onus on the end-user to discern between fact and fiction, and it often plays on the end-user's emotions to trigger an action, such as clicking on a link. Again, what does this have to do with us? Social media channels like Facebook and Twitter are prominent sources of traffic on most organizations' infrastructure, whether it be the routers and switches or the end-user devices that consume those network connections, bandwidth, and compute resources.

 

Fake news, on its own, may provide water cooler conversation starters, but throw in spear phishing and ransomware schemes, and it can have fatal consequences in the data center. Compromised data, data or intellectual property held for ransom, and disruption to IT services are all common examples of what can be done with just a single click on a fake news link by IT's weakest link: our end-users.

 

Both forms of IoT have their basis in getting data from systems. The biggest challenges revolve around the integrity of the data and the validity of the data analysis. Data can be framed to tell any story. The question is: Are you being framed by faulty data and/or analysis when dealing with the other IoT?

 

Let me know what you think in the comment section below.

Had a wonderful time in Austin last week for the 50th Labiversary, as well as the Austin SWUG. Thank you to everyone that could attend in person and online. We are fortunate in that we get to have fun with our work, and I think you see that come through in the 50th Lab.

 

Anyway, here's a bunch of links I found on the Intertubz that you may find interesting. Enjoy!

 

The 13 Future Technology Trends That Will Shape Business And Society

Some of these, like personalized food and 3D printed houses, are worth a conversation. Such discussions about the future of tech are more fun than the reality: 3D food malware.

 

Why you should standardize your microservices

Wait, why wouldn't you standardize whatever tools and services you are using?

 

Peloton – The Self-Driving Database Management System

I've seen this promise of a self-tuning database for over 20 years. Maybe the tech finally exists to make this happen, but bad code will always bring the best hardware to its knees. Face it: you can't fix, or predict, stupid.

 

RethinkDB: why we failed

"... because the open source developer tools market is one of the worst markets one could possibly end up in." Wait a minute—you built a business around an entire ecosystem that wants and expects software to be free, and then you are shocked they wouldn't pay for your software? I guess not everyone graduates top of their MBA class.

 

DeepTraffic | 6.S094: Deep Learning for Self-Driving Cars

Because I haven't written about autonomous cars in a while, here's a simulator that lets you train the car yourself.

 

Why You Should Consider Changing Your Echo's Wake Word From Alexa

Because defaults are an easy attack vector, that's why.

 

Stolen USB Drive Leads to $2.2 Million HIPAA Breach Penalty

Articles like this remind me how far we need to go to help people understand the value of their data. Too many people are taking the most critical asset for any company (the data) and treating it carelessly. You never know the true value of data until it is gone.

 

If anyone doubts how wonderful our video team is, I'm just going to show them this image:

 

[Image: Screenshot (45).png]

It’s a fact that things can go wrong in IT. But with the advent of IT monitoring and automation, the future seems a little brighter.

 

After over a decade of implementing monitoring systems, I’ve become all too familiar with what might be called monitoring grief. It involves a series of behaviors I’ve grouped into five stages.

 

While agencies often go through these stages when rolling out monitoring for the first time, they can also occur when a group or department starts to seriously implement an existing solution, or when new capabilities are added to a current monitoring suite.

 

Stage One: Monitor Everything

 

In this initial monitoring non-decision to “monitor everything,” it is assumed that all the information is good and can be “tuned up” later. Everyone is in denial that there’s about to be an alert storm.

 

Stage Two: The Prozac Moment

 

“All these things can’t possibly be going wrong!” This ignores the fact that a computer only defines “going wrong” as requested. So you ratchet things down, but “too much” is still showing red and the reaction remains the same.

 

Monitoring is catching all the stuff that's been going up and down for weeks, months, or years, but that nobody noticed. It's at this moment you might have to ask the system owner to take a breath and realize that knowing about outages is the first step to avoiding them.

 

Stage Three: Painting the Roses Green

 

The next stage occurs when too many things are still showing as “down” and no amount of tweaking is making them show “up” because, ahem, they are down.

 

System owners may ask you to change alert thresholds to impossible levels or to disable alerts entirely. I can understand the pressure to adjust reporting to senior management, but let’s not defeat the purpose of monitoring, especially on critical systems.

 

What makes this stage even more embarrassing is that the work involved in adjusting alerts is often greater than the work required to actually fix the issues causing them.

 

Stage Four: An Inconvenient Truth

 

If issues are suppressed for weeks or months, they will reach a point when there’s a critical error that can’t be glossed over. At that point, everything is analyzed, checked, and restarted in real time. For a system owner who has been avoiding dealing with the real issues, there is nowhere left to run or hide.

 

Stage Five: Finding the Right Balance

 

Assuming the system owner has survived through stage four with their job intact, stage five involves trying to get it right. Agencies need to make the investment to get their alerting thresholds set correctly and vary them based on the criticality of the systems. There’s also a lot that smart tools can do to correlate alerts and reduce the number of alerts the IT team has to manage. You’ll just have to migrate some of your unreliable systems and fix the issues that are causing network or systems management problems as time and budget allow.
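
As a rough sketch of what "correlate alerts" can mean in practice, the snippet below collapses bursts of alerts from the same node within a short window into a single incident. The alert tuple format and five-minute window are assumptions for illustration; monitoring suites do this, and much more, out of the box.

```python
# A rough sketch of alert correlation: group alerts from the same node that
# arrive within `window` seconds of each other into one incident.
def correlate(alerts, window=300):
    """alerts: list of (timestamp_seconds, node, message), sorted by time."""
    incidents = []
    open_incident = {}  # node -> index of its most recent incident
    last_seen = {}
    for ts, node, message in alerts:
        if node not in last_seen or ts - last_seen[node] > window:
            open_incident[node] = len(incidents)
            incidents.append({"node": node, "alerts": []})
        incidents[open_incident[node]]["alerts"].append((ts, message))
        last_seen[node] = ts
    return incidents

sample = [(0, "web01", "CPU > 90%"), (120, "web01", "CPU > 90%"),
          (4000, "web01", "node down")]
for incident in correlate(sample):
    print(incident["node"], "-", len(incident["alerts"]), "alert(s)")
# Two incidents: the first two alerts collapse, the third stands alone.
```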

 

Find the full article on Federal Technology Insider.

I'm in Austin this week for the 50th Labiversary, and I hope you have the opportunity to join us for the celebration. Words cannot express just how much fun it was to put together the 50th episode. I hope the fun shows on camera when you watch.

 

Anyway, here's a bunch of links I found on the Intertubz that you may find interesting, enjoy!

 

The Yahoo you know is not changing its name

No, but it is changing the people on the board, with Marissa Mayer being the biggest name to be shown the door. In the 4+ years she has been in charge of Yahoo, the only thing she made better was her own bank account (over $300 million total).

 

Yahoo's security is a huge mess

And yet Verizon is still willing to buy this company because they believe that the Yahoo brand has value. In my mind, when I hear the word Yahoo, I think "the most insecure thing on the planet, next to Bluetooth".

 

Yahoo may be dead, but Lycos still survives. Somehow.

Maybe Verizon can buy Lycos next. Or Tripod, because apparently that still exists, too.

 

Trump's cyber-guru Giuliani runs ancient 'easily hackable website'

I'm sure this is nothing to worry about, right? What's the worst that can happen?

 

3 Simple Steps To Disrupt Ransomware

Because it's always worth reminding folks that backups are the key to being able to recover and to keeping ransomware from ruining your day.

 

A few nights ago, over a liberal quantity of beers, my friends and I came up wit...

A fascinating conspiracy theory that I felt compelled to share here.

 

How Do Individual Contributors Get Stuck? A Primer

Interesting collection of thoughts on how people get stuck on certain tasks, and why.

 

Adding a new #nerdshirt to my collection this week:

[Image: deadpool.jpg]

For many federal IT pros, cloud computing continues to show great promise. However, others remain skeptical about transitioning to the cloud—specifically, about transitioning production databases—because of possible security risks and availability issues.

 

There are legitimate concerns to weigh when considering whether the cloud is a good choice, and there are ways to prepare that can help mitigate risk. Proper preparation makes the cloud environment a more viable option and lets you harness the advantages of the cloud with fewer concerns, specifically around security.

 

Here are the top five things to consider.

 

Tip 1: Know your platform

 

There are over 60 cloud providers authorized by FedRAMP, and they're not all created equal. Understand what your team needs and what the different providers offer, and select a platform that requires minimal training and oversight.

 

Tip 2: Maintain your own security

 

Although FedRAMP's rigorous security assessment is a good starting point for helping to protect data, it's your data, so take steps to protect it. That means encryption, data masking, and scrubbing out any personally identifiable information.
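
As a minimal illustration of the masking and scrubbing mentioned above, here is a sketch that hashes sensitive fields before a record leaves your environment. The field names and record layout are hypothetical, and a real masking policy would cover far more cases.

```python
# A minimal sketch of masking personally identifiable information before data
# leaves your environment. Field names and the sample record are hypothetical.
import hashlib

SENSITIVE_FIELDS = {"ssn", "email", "phone"}

def mask_record(record):
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            # A one-way hash preserves joinability without exposing raw values.
            masked[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            masked[key] = value
    return masked

print(mask_record({"name": "J. Smith", "ssn": "123-45-6789", "office": "Austin"}))
```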

 

Tip 3: Understand the fees

 

Cost savings is touted as one of the main benefits of moving to a cloud environment. Yet some early adopters found that the cost savings did not come right away. Others did not save money at all. There are many hidden costs when migrating to a cloud environment and it’s critical to understand and account for all costs before the project begins.

 

Consider the training costs. There are also significant expenses involved with migrating and implementing your existing applications in the cloud.

 

Moving to the cloud takes time, effort, and money. This doesn’t mean you won’t save money in the long run. There may ultimately be dramatic cost savings once systems have been migrated, but it may take several years to realize that savings.

 

Tip 4: Establish a recovery plan

 

It’s not impossible for FedRAMP service providers to go offline. Service outages are rare, and most shops are used to occasional service interruptions even when they are self-hosted. Nevertheless, make sure there is a plan B in case of an outage.

 

Similarly, make sure you know what your cloud provider will do in the event of a disaster. Can that provider help you recover lost data? That should be one of your most critical questions and one to which the provider must have an acceptable answer. Losing data is not an option.

 

Tip 5: Analyze performance and identify issues

 

End-to-end application performance monitoring is a must. If an application is running slowly, you will need to quickly find the root cause and fix it or turn it over to the cloud provider. Having data helps avoid finger pointing when something slows down.
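
To show the spirit of having your own performance data, here is a bare-bones sketch that times a request to an application endpoint. The URL is a placeholder; a real end-to-end APM tool measures every tier, not just one HTTP round trip.

```python
# A bare-bones sketch of capturing your own response-time data point so
# slowdowns can be discussed with numbers instead of finger pointing.
# The URL below is a placeholder.
import time
import urllib.request

def time_request(url, timeout=10.0):
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as response:
        response.read()
    return time.perf_counter() - start

elapsed = time_request("https://app.example.gov/health")
print(f"End-to-end response time: {elapsed:.3f}s")
```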

 

There are many advantages in moving to a cloud environment. The key is due diligence. Make sure you understand every aspect of the move and embrace the opportunity. You’ll be glad you did.

 

Find the full article on Government Computer News.

If you are like me, you watch your fair share of movies and TV. And being a seasoned IT pro, I always watch with a discerning eye when IT plays a critical role in advancing the plot. One of my particular not-so-favorite clichés is when the cynical computer whiz (baseball cap usually worn askew) sits down in front of a GUI-based PC and just starts hammering away at the keyboard, windows flashing up and colors dazzling on the monitor, all the while muttering nonsensical IT jargon under his or her breath. Brilliant!

 

It all started with WarGames, way back when, which led to the proliferation of recreational BBSes (and those eventually gave way to the internet for the masses). After that, Hollywood's focus on IT trailed off, with the occasional exception during the '90s. Over the past seven or eight years, Hollywood has finally recognized geek chic and started using IT as the central theme in movies and TV. I am still on the fence as to whether or not this is a good thing. Remember that discerning eye I mentioned? My teeth gnash and my nails dig in when I watch such obvious IT gaffes displayed for all to see.

 

Here is the latest example, which a friend brought to my attention: the decent AMC show Halt and Catch Fire. I can take it or leave it; to me, the series is filled with a little too much self-importance. An episode from earlier this season had a character doing something "dangerous."

 

[Image: ftp.jpg]

Had a great trip to Austin last week for the filming of the 50th episode of SolarWinds Lab. It's always great to see my team and everyone else in the office. Being able to collaborate on ideas in person is a nice change of pace for remote workers such as myself. The only downside to last week was the fact that Austin was cold! Hey Texas, I visit you to get away from the cold, not to be reminded of it! Let's hope for warmer weather next week.

 

Anyway, here's a bunch of links I found on the Intertubz that you may find interesting. Enjoy!

 

Rumors of Cmd’s death have been greatly exaggerated

I think it is great to see Microsoft, and companies in general, finally take a stand to respond to such tactics. There is a lot of noise on the internet and in order to stand out, people will resort to "turning lies into page views" as a career choice. It's about time we all learn to recognize the trolls for what they are.

 

FTC filed a lawsuit against D-Link over failure to secure its IoT devices

Finally, we see someone take action against the manufacturers of insecure devices. Here's hoping we see similar actions taken against applications that are built insecure, too.

 

Bank robber reveals identity – by using his debit card during crime

I know, I know... if he was smart, he wouldn't be robbing a bank. But this is a special kind of dumb, IMO.

 

Copycat Hackers Are Holding More Than 1,000 Databases for Ransom

Because I thought it was time to remind you of two things: (1) don't pay the ransom and (2) don't use default security options for an internet-facing database.

 

The Real Name Fallacy

Interesting study here, revealing that people are just as apt to be jerks online even when using their real names. Oh, yes, this makes sense. See above about the folks that aren't afraid to lie and use FUD in exchange for page views.

 

MIT Researchers: 2016 Didn’t Have More Famous Deaths Than Usual

Around mid-December, I was curious about this exact thing: are there more celebrity deaths than previous years, or are more just being reported? Similar to shark attacks being perceived as "on the rise" when it was just the reporting of them that had risen.

 

The center of North America is a town called Center, and it's totally a coincidence. Really.

Funny how sometimes things just work out like this, intended or not.

 

The view from my office for much of the next few weeks:

[Image: IMG_0676.JPG]

In March 2016, the U.S. Department of Defense embarked on a Cybersecurity Discipline Implementation Plan to identify specific tasks its IT personnel must perform to reinforce basic cybersecurity requirements in policies, directives, and orders.

 

The plan segments tasks into four key “lines of effort” to strengthen cybersecurity initiatives:

  1. Strong authentication
  2. Device hardening
  3. Reduce attack surface
  4. Align cybersecurity and computer network defense service providers

 

Let’s analyze the plan’s goals one at a time. “Strong authentication helps prevent unauthorized access, including wide-scale network compromise by [adversaries] impersonating privileged administrators,” reads a portion of the planning guidance. Tasks specifically focus on protecting web servers and applications through PKI user authentication.

 

The authentication effort helps ensure that an organization's list of privileged and non-privileged users is always current, that PKI verifies user identities, and that unused accounts are deactivated or deleted. Account authentication is tied to named individuals, and each account carries only the level of access required for the user's role. Privileged accounts are tied to specific users, so they have privileged access only to the network segments and applications required for assigned tasks.

 

“Ensuring devices are properly hardened increases the cost of, and complexity required for, successful exploitation attempts by the adversary,” the document states.

 

One of the first steps is to verify that each device on the network is mapped to a secure baseline configuration and that the IA team performs routine configuration validation scans. This activity, coupled with vulnerability assessment scans, ensures that patches are applied promptly and that only permitted ports, protocols, and services are operational.
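
Here is a small, hedged sketch of one narrow slice of that kind of validation: checking a host's listening ports against an approved baseline. The host address and allowed ports are assumptions, and a real validation scan covers configuration, services, and patch levels, not just ports.

```python
# A small sketch of one hardening check: confirm that only permitted ports are
# listening on a host. The target address and approved baseline are assumptions.
import socket

ALLOWED_PORTS = {22, 443}          # hypothetical approved baseline
PORTS_TO_CHECK = range(1, 1025)    # well-known ports

def open_ports(host, ports):
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.2)
            if sock.connect_ex((host, port)) == 0:
                found.add(port)
    return found

unexpected = open_ports("10.0.0.15", PORTS_TO_CHECK) - ALLOWED_PORTS
if unexpected:
    print(f"Ports outside the approved baseline: {sorted(unexpected)}")
```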

 

It is essential to create a plan of action, and set milestones to track all findings. A mitigation plan, timing for each finding, and an identification of the severity of each finding are also required.

 

IT managers must seek to reduce the attack surface, eliminating internet-facing servers from the core of the Department of Defense Information Network (DODIN), while ensuring that only authorized devices can access the infrastructure.

 

Managers who oversee user access to applications or systems via commercial internet should have a migration plan to move the system or application away from the DODIN core and toward a computing environment that requires a lower level of security.

 

“Monitoring activity at the perimeter, on the DODIN and on all DOD information networks, ensures rapid identification and response to potential intrusions,” the document states. For the IT professional, this means making sure you know exactly what’s happening on the network at all times.

 

A SIEM solution will lead successful strategies here, as it provides log and event management among other benefits. Add in a network traffic analyzer—particularly one that provides the ability to perform traffic forensics—and server monitoring to understand interdependencies within and outside the network.

 

The DOD effort seeks a “persistent state of high enterprise cybersecurity readiness across the DOD environment,” the document states. This is the first phase of the agency’s security plan. With more to come, each step likely will focus on different DOD infrastructure areas. Our job? Be prepared.

 

Find the full article on Signal.

[Image: accidentaldba.jpg]

 

Each new President-Elect talks about the goals they have for their first 100 days in office. Life as a new (or accidental) DBA will be no different. Well, maybe a little different, because as a new DBA, you likely have a 90-day probationary period.

 

That’s right: a DBA has less time to show their value than the president! That means you better be prepared to hit the ground running. But don’t panic! I’ve put together this post to help you get started on the right foot.

 

What DBAs Have in Common With the President

 

DBAs have much in common with the president. First, half the people around you doubt whether you are qualified to hold your job. Second, every time you make a decision or plot a course of action, you will be criticized even by your supporters. Third, you will be judged by what you accomplish in your first one hundred days, good or bad, even if it was something not in your control.

 

Also consider that the president is subject to approval ratings. You will have your own version of this: your annual performance review. Come review time, you want your approval ratings to be as high as possible.

 

Right about now, you're probably reading this and thinking that being a DBA is the worst job in all of IT. Perhaps it is, but as long as you are aware of these things when you start, the role may not be as awful as it sounds.

 

Your first objective is to create an action plan. If you think you can show up, grab a slice of bacon, and ease into your new position, then you are mistaken. Your bacon can wait until after you start gathering the information you need in order to do your new job effectively.

 

Here’s a quick list of the questions you need to ask yourself:

 

  • What servers are you responsible for?
  • What applications are you expected to support?
  • What time of day are the applications used?
  • Who are your customers?
  • Are the databases being backed up properly right now?
  • How would you know if the backups were failing?

 

Even that list of basics shows how the role of a DBA can quickly become overwhelming. That is why you need to put together a checklist of the bare essentials and get started. Then you can start making short-term plans for improvements.

 

Trust me, it is easier than it sounds. You just need to be organized.

 

The Initial Checklist

 

By now, you should be sitting at your desk on what we will call Day Zero. Your initial meetings with HR are over, you have gotten a tour of the place, and you are making sure you have the access you need to get started.

 

The very first piece of information you need is a list of servers and systems you are responsible for. Without that little nugget of information, it will be difficult to make headway as you start your long, slow, journey upstream.

 

Because I like making lists and categorizing things, I have divided this initial checklist into sections. One section pertains to gathering information on what I simply call your stuff. Another section deals with finding information on your customer’s stuff. The last section is what I call your action plans. Focus your efforts on these three areas on Day Zero: find your stuff, find your customer’s stuff, and start making an action plan.

 

A sample checklist might look like this:

 

  1. Create a list of servers
  2. Check that database backups are running
  3. Spot check and verify that you can do a restore from one of those backups
  4. Build a list of customers
  5. List the most important databases
  6. List upcoming deliverables/projects
  7. Establish environmental baselines
    1. Server configuration check
    2. Instance configuration check
    3. Database configuration check
  8. Compose your recovery plan (not your backup plan, your recovery plan)

 

Notice that the checklist is missing things people will tell you are a must for DBAs to be doing daily—things like index maintenance, performance tuning, reviewing event logs, etc. Sure, all of those things are necessary, but we are still on your list of items for Day Zero. Everything I have mentioned will take you more than a few days to gather. If you get tied up troubleshooting some stored procedure on Day Zero, then you are setting yourself up for a massive failure should a disaster hit before you have had time to document your recovery plan.

 

Would you rather be a hero for telling that developer to stop writing cursors or a hero for informing a customer that you can have their database available again in less than 30 minutes? I know which choice I would make so soon after starting a new position.

 

On Day Zero, explain to your manager that you will be gathering this inventory data first. By taking the initiative to perform due diligence, you are showing them that your first mission is to safeguard their data, your job, and their job as well. They probably won’t be able to produce the inventory for you, and they are going to want it even more than you do. You will have plenty of time later on for the other stuff. It will fall naturally into your environment baseline and subsequent action plans as you bring standards to your enterprise.

 

Let’s look at why each of the items in the checklist is important to address from Day Zero.

 

Create a List of Servers

 

Trust me: at some point, someone will walk up to you and start talking about a server you never knew existed. And they will be very confused as to why you have never heard of the server, since they work with it all the time. There is a database on it, and you are the DBA, so you should already know all of this, right?

 

Do your best to gather as much information right away about the servers you are expected to administer. That way, you will know more about what you are up against and it will help you when it comes time to formulate your action plans. These plans will be very different if you have five or five hundred instances to look after.

 

Start compiling this list by asking your immediate supervisor and go from there. The trail may take you to application managers and server administrators. For example, your boss might say that you are responsible for the payroll databases, but what are “the payroll databases”? You will need to do some detective work to track down the specific databases involved. But this detective work will pay off by deepening your knowledge and understanding of where you work.

 

If you are looking for a technical solution to finding database servers, there are a handful of ways to get the job done. The easiest is to use a 3rd party monitoring tool that discovers servers and the applications running on them. You could also use free tools like SQL Power Doc out on Codeplex.

 

You should also have a list of servers that are not your responsibility. There is a chance that vendors maintain some systems in your environment. If something goes wrong with one of those servers it is important to know who is responsible for what. And if someone tells you that you do not need to worry about a server, my advice would be to get that in writing. When disaster strikes, you had better be able to provide proof about the systems that are and are not your responsibility.

 

Check Database Backups

 

Once you identify the servers you are responsible for, the next step is to verify that the databases are being backed up properly. Do not assume that everything is working perfectly. Check that the backup files exist (both system and user databases) and check to see if there have been any recent failures.

 

You will also want to note the backup schedule for the servers and databases. You can use that information later to verify that the databases are being backed up to meet business requirements. You would not want to find out that the business is expecting a point-in-time restore ability for a database that is only being backed up once a week.
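
As a quick, hedged example of that kind of spot check, the sketch below looks for recent .bak files in a backup share and flags databases whose newest file is older than expected. The share path, file-naming convention, and 24-hour window are assumptions; match them to the real backup schedule, and prefer querying the backup history on the server itself when you can.

```python
# A quick sketch of a Day Zero backup spot check, assuming full backups land as
# <dbname>_<timestamp>.bak files in a known share. Path, naming convention, and
# the 24-hour expectation are assumptions to adjust for your environment.
import time
from pathlib import Path

BACKUP_DIR = Path(r"\\backupserver\sql_backups")
MAX_AGE_HOURS = 24

def stale_backups(backup_dir, max_age_hours):
    cutoff = time.time() - max_age_hours * 3600
    newest = {}
    for bak in backup_dir.glob("**/*.bak"):
        db = bak.stem.split("_")[0]  # assumes <dbname>_<timestamp>.bak
        newest[db] = max(newest.get(db, 0), bak.stat().st_mtime)
    return [db for db, mtime in newest.items() if mtime < cutoff]

for db in stale_backups(BACKUP_DIR, MAX_AGE_HOURS):
    print(f"{db}: no backup newer than {MAX_AGE_HOURS} hours")
```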

 

I cannot stress this enough, but if there is one thing you need to focus on as a DBA, it is ensuring that you can recover in the event of a disaster. Any good recovery plan starts with having a reliable database backup strategy.

 

Verify That You Can Restore

 

There is one, and only one, way for you to verify that your backups are good: you need to test that they can be restored. Focus your efforts on any group or set of databases. The real goal here is for you to become familiar with the restore process in your new shop, as well as to verify that the backups are usable.

 

Make certain you know all aspects of the recovery process for your shop before you start poking around on any system of importance. It could save you some embarrassment later, should you sound the alarm that a backup is not valid when it turns out the only thing not valid is your understanding of how things work. And these practice restores are a great way to make certain you are able to meet the RPO and RTO requirements.
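
If you want to script part of that spot check, here is a hedged sketch that runs RESTORE VERIFYONLY against a backup file, assuming the third-party pyodbc package and a SQL Server ODBC driver are available. The server name, driver version, and file path are placeholders, and VERIFYONLY only validates the backup media; it is no substitute for periodically restoring a real copy and checking it.

```python
# A hedged sketch of verifying a backup file with RESTORE VERIFYONLY via pyodbc.
# Driver name, server, and backup path are placeholders.
import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=sqltest01;Trusted_Connection=yes;")

def verify_backup(backup_path):
    # autocommit is required: RESTORE cannot run inside a user transaction.
    conn = pyodbc.connect(CONN_STR, autocommit=True)
    try:
        conn.cursor().execute("RESTORE VERIFYONLY FROM DISK = ?", backup_path)
        return True
    except pyodbc.Error as err:
        print(f"{backup_path} failed verification: {err}")
        return False
    finally:
        conn.close()

verify_backup(r"\\backupserver\sql_backups\Payroll_20170115.bak")
```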

 

Build a List of Customers

 

You must find the customers for each of the servers you are responsible for administering. Note that this line of inquiry can result in a very large list. With shared systems, you could find that everyone has a piece of every server!

 

The list of customers is vital information. For example, if you need to reboot a server, it is nice to know who to contact in order to explain that the server will be offline for five minutes while it is rebooted. And while you compile your list of customers, it does not hurt to know who the executives are and which servers they are most dependent upon.

 

When you start listing out the customers, you should also start asking about the applications and systems those customers use, and the time of day they are being used the most. You may be surprised to find some systems that people consider relatively minor are used throughout the day while other systems that are considered most important are used only once a month.

 

List the “Most Important” Databases

 

While you gather your list of customers, go one step further and find out what their most important databases are. This could be done by either (1) asking them or (2) asking others, and then (3) comparing those lists. You will be surprised to find how many people can forget about some of their systems and need a gentle reminder about their importance. As DBAs, we recognize that some databases are more important than others, especially given any particular time of day, week, or month.

 

For example, you could have a mission critical data warehouse. Everyone in the company could tell you that this system is vital. What they cannot tell you, however, is that it is only used for three days out of the month. The database could be offline for weeks and no one would say a word.

 

That does not mean that when these systems are not used, they are not important. But if 17 different groups mention some small tiny database, and they consider the database to be of minor importance, you may consider it very important because it is touched by so many different people.

 

List Upcoming Projects and Deliverables

 

You want to minimize the number of surprises that await you. Knowing what projects are currently planned helps you understand how much time you will be asked to allocate for each one. And do keep in mind that you will be expected to maintain a level of production support, in addition to your project support and the action tasks you are about to start compiling.

 

You’ll also want to know which servers will be decommissioned in the near future so that you don’t waste time performance tuning servers that are on death row.

 

Establish Environmental Baselines

 

Baselining your environment is a necessary function that gets overlooked. The importance of having a documented starting point cannot be stressed enough. Without a starting point as a reference, it will be difficult for you to chart and report your progress over time.

 

You have already done one baseline item: you have evaluated your database backups. You know how large they are, when they are started, and how long they take. Now take the time to document the configurations of the server, the instance, and the individual databases.

 

Then you can focus on collecting basic performance metrics: memory, CPU, disk, and network. This is where 3rd party tools shine, as they do the heavy lifting for you.
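
For a do-it-yourself starting point, here is a small sketch that captures a point-in-time snapshot of those four metrics, assuming the third-party psutil package is installed. A useful baseline is a series of samples over days or weeks, which is exactly the heavy lifting a monitoring tool does for you.

```python
# A small sketch of capturing a point-in-time performance snapshot (CPU, memory,
# disk, network), assuming the third-party psutil package is installed.
import json
import time
import psutil

def snapshot():
    net = psutil.net_io_counters()
    return {
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
        "net_bytes_sent": net.bytes_sent,
        "net_bytes_recv": net.bytes_recv,
    }

# Append each sample to a file so you have a documented starting point.
with open("baseline.jsonl", "a") as out:
    out.write(json.dumps(snapshot()) + "\n")
```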

 

Compose Your Recovery Plan

 

Notice how I said recovery plan as opposed to backup plan. In your checklist thus far, you have already verified your database backups are running, started to spot check that you can restore from your backups, and have gotten an idea of your important databases. Now is the time to put all of this together in the form of a disaster recovery (DR) plan.

 

Make no mistake about it: should a disaster happen, your job is on the line. If you fail to recover because you are not prepared, then you could easily find yourself reassigned to “special projects” by the end of the week. The best way to avoid that is to practice, practice, practice. Your business should have some scheduled DR tests perhaps once a year, but you should perform your own smaller DR tests on a more frequent basis.

 

And don't forget about recovering from past days or weeks. If your customer needs a database backup restored from two months ago, make sure you know every step in the process to get that job done. If your company uses an offsite tape storage company, and it takes two days to recall a tape from offsite, then you need to communicate that fact to your users ahead of time as part of your DR plans.

 

Track Your Progress

As a DBA, a lot of your work is done behind the scenes. In fact, people will often wonder what it is you do all day, since much of your work is never actually seen by the end-users. Your checklist will serve you well when you try to show people some of the tangible results that you have been delivering.

 

No matter how many people you meet and greet in the coming weeks, unless you can provide some evidence of tangible results to your manager and others, people will inevitably wonder what it is you do all day. If your initial checklist shows that you have twenty-five servers, six of which have data and logs on the C: drive, and two others had no backups at all, it is going to be easy for you to report later that your twenty-five servers now have backups running and all drives configured properly.

 

One thing I have learned in my years as a DBA: no one cares about effort, only the end result. Make certain you keep track of your progress so that the facts can help provide a way to understand exactly what you have been delivering.

 

 
