
In a recent post on the state of data security, I discussed how our privacy online and the security of our personal information are at serious risk and only getting worse. Now, instead of focusing on the problem, I'd like to focus on some helpful solutions we can implement at the individual, organizational, and even state level.

 

We need to start with the understanding that there's no such thing as absolute security. All our solutions are small pieces of an overall security awareness strategy—this means there's no silver bullet, no single vendor solution, and no magical security awareness training seminar that will solve all our problems.

 

However, when we combine proper education with small, steady improvements in both technology and culture, our overall security posture becomes more robust.

 

The first thing we need to get in our heads is that we’re typically more reactive than proactive. How often have you attended a security awareness seminar at work or implemented some sort of patch or security technology in response to a threat on the news, rather than in anticipation of future threats?

 

If we’re only ever responding to the threat of the day, we’ve already lost.

 

Individual level

First, there is little to no reason why any institution other than a bank or credit bureau needs a social security number, or really, that much personal information in general. Sometimes a service requires a home address and credit card number for shipping, but that’s where it should end. For e-commerce, it’s better to use a low-limit credit card dedicated only to online purchases rather than a card with a very high limit, or worse yet, a check card number directly attached to a checking account. In this way, there is at least a buffer between a potential thief and our actual bank account.   

 

Second, we should be using a variety of strong passwords rather than a single, easy-to-remember password for all our online logins. Personally, I believe the technology exists for passwords to be phased out eventually, but until that happens, our passwords should be complex, varied, and changed from time to time.
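
For those who want to see what "complex and varied" looks like in practice, here is a minimal Python sketch, using only the standard library's secrets module, that generates a long, random, unique password for each service. The service names are placeholders; in real life the output would go straight into a password manager rather than your memory.

```python
import secrets
import string


def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


# One unique password per service -- never reuse across logins.
for service in ("email", "banking", "shopping"):
    print(f"{service}: {generate_password()}")
```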

 

Third, we can choose browsers that don’t track our movement online, and we can opt for email services that both encrypt and honor the privacy of the content of our messages. Granted, there is certainly a trust element there, and ISPs still know what we’re doing, but remember that each small piece we add is part of the bigger picture of our overall security posture.

 

And of course, we should be using all the best practices, such as utilizing a firewall, locking our personal computers and encrypting their hard drives, keeping passwords private, and deleting old and unused online accounts (such as from MySpace or AOL).

 

Organizational level

At an organizational level—whether that be a company, service provider, municipality, etc.—the cost and complexity increase dramatically, especially when dealing with others' personal information. Whether it's employees, customers, or members of a social community, organizations must be especially proactive to protect the data they store within their infrastructure.

 

First, rigorous adherence to security best practices must be ingrained in the culture of the executive staff and every single employee in the company, including the IT staff. Because internet usage is now generally very transactional, engineers need to be educated on how attackers actually hack systems and reverse engineer technology. This is security awareness training for IT.

 

Second, companies must encrypt data both at rest and in motion on the backend. Yes, it’s more work and money, but this alone will mitigate the risk of data misuse in the event of a data loss. This involves encrypting server hard drives and using only encrypted channels for data in motion. This can become very cumbersome with regard to east-west traffic within a data center itself, but the principle should be applied where it can be.
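
To make the "at rest" half of that concrete, here is a minimal Python sketch using the third-party cryptography package (an assumption; any vetted library or native database encryption would do). It encrypts a record before it touches disk, so a stolen file is useless without the key, which in a real deployment would live in a key management system rather than next to the data.

```python
from cryptography.fernet import Fernet

# In production the key comes from a key management system and is never
# stored alongside the encrypted data. Generated here only for the demo.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer=Jane Doe;card=4111111111111111"
encrypted = cipher.encrypt(record)  # this ciphertext is what lands on disk

with open("customer_record.bin", "wb") as f:
    f.write(encrypted)

# Only a holder of the key can recover the plaintext.
with open("customer_record.bin", "rb") as f:
    print(cipher.decrypt(f.read()))
```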

 

Third, organizations storing others' personal information should consider decentralizing data as much as possible. This is also expensive because it requires an infrastructure and culture shift within IT departments accustomed to centralizing and clustering resources as much as possible. Small and medium-sized businesses are especially vulnerable because attackers know they are easier targets, so they especially need to make the educational and cultural changes to protect data.

 

In a recent article, I discussed the top 10 network security best practices organizations should stick to. These include keeping up with current patches, making use of good endpoint protection, using centralized authentication, using a decent monitoring and logging solution, staying on top of end-user training, and preventing or limiting the use of personal devices on the corporate network. These are ways to prevent data leaks and outright breaches.

 

Government oversight

Our municipalities and larger government entities should be following these principles for their internal infrastructures as well, but how does government oversight factor into our overall security posture?

 

Government regulations for financial institutions already exist, but what about other industries such as e-commerce, social media providers, our private employers, etc.? This is extremely difficult because laws differ from state to state and country to country, so how can government oversight help protect our personal information online?

 

This is a debatable topic because it involves the question of how much involvement government should have in the private sector and in our private lives. However, there are some things that governments can do that don't impinge on privacy but help to ensure security.

 

First, there should be legislation governing the securing of third-party data. Data is the new oil, but it’s also a major liability. So just as it’s illegal for someone to steal a valuable widget off a store shelf, there should be explicit legislation and subsequent consequences for the theft or mishandling of our information. This is difficult, because the bad guys are the thieves, not always the company that was breached.

 

This means there need to be better methods to track stolen data after a breach in order to capture and penalize the attacker. However, without tracking data after a breach, the entity left holding the bag is the organization that suffered the data breach and the individual whose information was stolen.

 

We need to determine where the responsibility lies. Is it all on the end-user? Is it all on companies? Is it by government legislation? The reality is that it’s a decentralized responsibility in the sense that companies and governments store our information and are therefore responsible for keeping it safe. In most cases, though, we’ve also chosen to share that information, so we bear responsibility as well.

 

To some extent, governments can regulate the manner in which third-party data is stored. This would increase overall security posture and penalize organizations that mishandle our private information. Ultimately, the criminal is the thief, but this way, the organizations that handle our data would have the incentive to improve their security posture as well.

 

In conclusion

We need to remember that there's no such thing as absolute security. The entire security paradigm in our society must change. Rather than being reactive and relying on stopgap measures, security awareness today must begin in the elementary school classroom and continue into the boardroom. Our solutions are small pieces of an overall security awareness strategy, and this is okay. If applied proactively and dutifully, they will increase our overall security posture and decrease the risk to our information online.

 

Here I outlined only a few pieces to this puzzle, so I’d love to hear your thoughts and additional suggestions in the comments.

Without a doubt, we're at a tipping point when it comes to security and the Internet of Things (IoT). Recently, security flaws have been exposed in consumer products, including children's toys, baby monitors, cars, and pacemakers. In late October 2016, Dyn®, an internet infrastructure vendor, suffered a malicious DDoS attack launched from a botnet of malware-infected IoT devices such as webcams and printers.

 

No, IoT security concerns are not new. In fact, any device that's connected to a network represents an opportunity for malicious activity. But what is new is the exponential rate at which consumer-grade IoT devices are now being connected to corporate networks, and doing so (for the most part) without IT's knowledge. This trend is both astounding and alarming: if your end user is now empowered to bring in and deploy devices at their convenience, your IT department is left with an unprecedented security blind spot. How can you defend against something you don't know is a vulnerability?

 

BYOD 2.0


Right now, most of you are more than likely experiencing a flashback to the early days of Bring Your Own Device (BYOD) - when new devices were popping up on the network left and right, faster than IT could regulate them. For all intents and purposes, IoT can and should be considered BYOD 2.0. The frequency with which IoT devices are being connected to secured, corporate networks is accelerating dramatically, spurred on in large part by businesses' growing interest in leveraging data and insights collected from IoT devices, combined with vendors' efforts to significantly simplify the deployment process.

 

Whatever the reason, the proliferation of unprotected, largely unknown, and unmonitored devices on the network poses several problems for the IT professionals tasked with managing networks and ensuring organizational security.

 

The Challenges


First, there are cracks in the technology foundation upon which these little IoT devices are built. The devices themselves are inexpensive, and the engineering that goes into them is focused more on a lightweight consumer experience than on an enterprise use case that necessitates legitimate security. As a result, these devices introduce vulnerabilities that can be leveraged against your organization, whether through a brand-new attack vector or an existing one that has grown in size.

 

Similarly, many consumer-grade devices aren't built to auto-update, so the security patch process is lacking, creating yet another hole in your organization's security posture. In some cases, properly configured enterprise networks can identify unapproved devices being connected to the network (such as an employee attaching a home Wi-Fi router), shut down the port, and eradicate the potential security vulnerability. However, this type of network access control (NAC) usually requires a specialized security team to manage and is often seen only in large network environments. For the average network administrator, this means it is of paramount importance that you have a fundamental understanding of and visibility into what's on your network - and what it's talking to - at all times.

It's also worth noting that just because your organization may own a device and consider it secure does not mean the external data repository is secure. Fundamentally, IoT boils down to a device inside your private network that is communicating some type of information out to a cloud-based service. When you don't recognize a connected device on your network and you're unsure where it's transmitting data, that's a problem.

 

Creating a Strategy and Staying Ahead


Gartner® estimates that there will be 21 billion endpoints in use by 2020. This is an anxiety-inducing number, and it may seem like the industry is moving too quickly for organizations to slow down and implement an effective IoT strategy.

 

Still, it's imperative that your organization does so, and sooner rather than later. Here are several best practices you can use to create an initial response to rampant IoT connections on your corporate network:

  • Create a vetting and management policy: Security oversight starts with policy. Developing a policy that lays out guidelines for IoT device integration and connection to your network will help streamline your management and oversight process both today and in the future. Consider questions like, "Does my organization want to permit these types of devices on the corporate network?" If so, "What's the vetting process, and what management processes do they need to be compatible with?" "Are there any known vulnerabilities associated with the device, and how are these vulnerabilities best remediated or mitigated?" The answers to these questions will form the foundation of all future security controls and processes.
    If you choose to allow devices to be added in the future, this policy will ideally also include guidelines around various network segments that should/should not be used to connect devices that may invite a security breach. For example, any devices that request connection to segments that include highly secured data or support highly critical business processes should be in accordance with the governance policy for each segment, or not allowed to connect. This security policy should include next steps that go beyond simply "unplugging" and that are written down and available for all IT employees to access. Security is and will always be about implementing and verifying policies.
  • Find your visibility baseline: Using a set of comprehensive network management and monitoring tools, you should work across the IT department to itemize everything currently connected to your wireless network and determine whether each device belongs or is potentially a threat. IT professionals should also look to leverage tools that provide a view into who and what is connected to your network, and when and where they are connected. These tools also offer administrators an overview of which ports are in use and which are not, allowing you to keep unused ports closed against potential security threats and avoid covertly added devices.
    As part of this exercise, you should look to create a supplemental set of whitelists - lists of approved machines for your network that will help your team more easily and quickly identify when something out of the ordinary may have been added, as well as surface any existing unknown devices your team may need to vet and disconnect immediately. A minimal scripted sketch of this whitelist comparison appears just after this list.
  • Establish a "Who's Responsible?" list: It sounds like a no-brainer, but this is a critical element of an IoT management strategy. Having a go-to list of who specifically is responsible for any one device in the event there is a data breach will help speed time to resolution and reduce the risk of a substantial loss. Each owner should also be responsible for understanding their device's reported vulnerabilities and ensuring subsequent security patches are made on a regular basis.
  • Maintain awareness: The best way to stay ahead of the IoT explosion is to consume updates about everything. Network administrators should be monitoring for vulnerabilities and implementing patches at least once a week; security administrators should be doing this multiple times a day. Your organization should also consider integrating regular audits to ensure all policy-mandated security controls and processes are operational as specified and directed. At the same time, your IT department should look to host some type of security seminar for end-users where you're able to review what is allowed to be connected to your corporate network and, more importantly, what's not allowed, in order to help ensure the safety of personal and enterprise data.
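
As promised in the visibility baseline item above, here is a minimal Python sketch of the whitelist comparison. It assumes you can export the currently connected MAC addresses from your monitoring tool into a text file and that you maintain an approved list in the same format; both file names are illustrative.

```python
def load_macs(path: str) -> set:
    """Read one MAC address per line, ignoring blank lines and comments."""
    macs = set()
    with open(path) as f:
        for line in f:
            line = line.strip().lower()
            if line and not line.startswith("#"):
                macs.add(line)
    return macs


# Hypothetical exports: the approved whitelist vs. what is on the wire now.
approved = load_macs("approved_devices.txt")
connected = load_macs("currently_connected.txt")

for mac in sorted(connected - approved):
    print(f"Unvetted device on the network: {mac}")
```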

 

Final Thoughts

 

IoT is here to stay. If you're not already, you will soon be required to manage more and more network-connected devices, resulting in security issues and a monumental challenge in storing, managing, and analyzing mountains of data. The risk to your business will likely only increase the longer you work without a defined management strategy in place. Remember, with most IoT vendors more concerned about speed to market than security, the management burden falls to you as the IT professional to ensure that both your organization's and your end-users' data is protected. Leveraging the best practices identified above can help you begin ensuring your organization is getting the most out of IoT without worrying (too much) about the potential risks.


This is a cross-post of IoT and Health Check on Sys-Con.

sqlrockstar

The Need for Speed

Posted by sqlrockstar Employee Jul 27, 2017

 

I’ve got a quick quiz for you today. Which scenario is worse?

 

Scenario 1:

SQL Statement 1 executes 1,000 times, making end-users wait 10 minutes. 99% of the wait time for SQL Statement 1 is “PAGEIOLATCH_EX.”

 

Scenario 2:

SQL statement 2 executes one time, also making the end-users wait 10 minutes. 99% of the wait time for SQL Statement 2 is “LCK_M_X.”

 

The answer is that both are equally bad, because both made end-users wait 10 minutes. It doesn't matter to the end-user whether the root cause is disk, memory, CPU, network, or locking/blocking. They only care that they have to wait 10 minutes.
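
If you want to see the wait-type breakdown behind scenarios like these on your own instance, here is a hedged Python sketch. It assumes the pyodbc package, a reachable SQL Server instance (the connection string is illustrative), and VIEW SERVER STATE permission; it simply reads the instance-level sys.dm_os_wait_stats DMV and prints the top waits by accumulated wait time.

```python
import pyodbc

# Illustrative connection string; adjust driver, server, and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;"
)

query = """
SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
"""

for wait_type, wait_ms, tasks in conn.cursor().execute(query):
    print(f"{wait_type:<35} {wait_ms:>12} ms across {tasks} waits")
```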

 

The end-users will pressure you to tune the queries to make them faster. They want speed. You will then, in turn, try to tune these queries to reduce their run duration. Speed and time become the measuring sticks for success.

 

Many data professionals put their focus on run duration. But by focusing only on run duration, they overlook the concept of throughput.

 

Time for another quiz: Which scenario is better?

 

Scenario 3:

You can tune SQL Statement 3 to execute 1,000 times, and run for 30 seconds.

 

Scenario 4:

You can tune SQL Statement 4 to execute 10,000 times, and run for 35 seconds.

 

The extra five seconds of wait time is a tradeoff for being able to handle 10x the load. I know I'd rather have 10,000 happy users than 9,000 unhappy ones. The trouble is that tuning for throughput can take a lot more effort than tuning for duration alone, and for many teams, getting up and running quickly matters more than designing for efficiency.

 

Once upon a time, we designed systems to be efficient. That efficiency sometimes went too far, of course, which led to a tremendous number of billable hours as we updated systems to avoid the Y2K apocalypse. But today, efficiency seems to be a lost art. Throwing hardware at the problem gets easier with each passing day. For cloud-first systems, efficiency is an afterthought because the cloud makes it easy to scale up and down as needed.

 

Building and maintaining efficient applications that focus on speed as well as throughput requires a lot of discipline. Here’s a list of three things you can do, starting today, for new and existing database queries.

 

Examine current logical I/O utilization

To write queries for scale, focus on logical I/O. The more logical I/O needed to satisfy a request, the longer it takes to run and the less throughput you will have available. One of the largest culprits for extra logical I/O is the use of incorrect datatypes. I put together a couple of scripts a while back to help you with this. One script will look inside a SQL Server® database for integer values that may need to have their datatypes adjusted. Or you can run this script to check the datatypes currently residing in memory, as those are likely the ones you should focus on adjusting first. In either case, you are proactively measuring and monitoring how well the data matches the defined datatypes.
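
The scripts referenced above aren't reproduced here, but as a rough illustration of the kind of check they perform, this hedged Python sketch (again assuming pyodbc and VIEW SERVER STATE permission; the connection string is illustrative) lists the cached plans doing the most logical reads, which is usually where datatype and indexing problems surface first.

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;"
)

query = """
SELECT TOP (10)
       qs.total_logical_reads,
       qs.execution_count,
       SUBSTRING(st.text, 1, 120) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
"""

for reads, execs, text in conn.cursor().execute(query):
    print(f"{reads:>15} logical reads / {execs:>8} executions :: {text}")
```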

 

Take a good look at what is in your pipe

After you have spent time optimizing your queries for logical I/O, the last thing you want is to find that you have little bandwidth available for data traffic because everyone in the office is playing Pokemon®. We started this post talking about speed, then throughput, and now we are talking about capacity. You need to know what your maximum capacity is for all traffic, how much of that traffic is dedicated to your database queries, and how fast those queries are returning data to the end-users.

 

Test for scalability

You can use tools such as HammerDB and Visual Studio to create load tests. After you have optimized your query, see how it will run when executed by simultaneous users. I like to take a typical workload and try running 10x, 25x, 50x, and 100x test loads and see where the bottlenecks happen. It is important to understand that your testing will not likely simulate production network bandwidth, so keep that metric in mind during your tests. You don’t want to get your code to be 100x capable only to find you don’t have the bandwidth.
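
HammerDB and Visual Studio are the heavyweight options; as a lighter-weight illustration of the same idea, here is a hedged Python sketch that fires a stand-in query from increasing numbers of concurrent workers and reports elapsed times, so you can watch where response time starts to degrade. The connection string and query are illustrative only.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;"
)
QUERY = "SELECT COUNT(*) FROM sys.objects;"  # stand-in for the query under test


def run_once(_: int) -> float:
    """Open a connection, run the query once, and return elapsed seconds."""
    start = time.perf_counter()
    conn = pyodbc.connect(CONN_STR)
    try:
        conn.cursor().execute(QUERY).fetchall()
    finally:
        conn.close()
    return time.perf_counter() - start


for workers in (10, 25, 50, 100):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        timings = list(pool.map(run_once, range(workers)))
    print(f"{workers:>3} concurrent runs: worst {max(timings):.2f}s, "
          f"average {sum(timings) / len(timings):.2f}s")
```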

 

Summary

When it comes to performance tuning your database queries, your initial focus will be on speed. In addition to speed, it is necessary to consider throughput and capacity as important success metrics for tuning your queries. By focusing on logical I/O, testing for scalability, and measuring your network bandwidth, you will be able to maximize your resources. 

If you love technology and enjoy learning, then working in IT without losing your mind will be a breeze. In my past 20 years of working in IT, I've personally found that if you are willing to keep learning, this will lead you down a very interesting journey—and one of great success.

 

How to choose what to learn

 

In IT, there is so much to choose from that it can leave your head spinning if you don't know where to focus. I recently became part of a mentoring program, and my mentee struggled with this very problem. She is committed to virtualization, but the emergence of cloud left her feeling confused. Every virtualization provider is moving to some form of cloud, but in my experience, not all virtualization platforms in the cloud are created equal. She's also not from the U.S., which is another factor—some technologies are simply more widely adopted in some regions of the world than others. So how did we decide what she will learn next? We knew for certain it would be a cloud-related technology. That being said, there was much to consider. So, we talked through some key questions, which I would also recommend you consider.

 

  • Which cloud providers meet the security requirements of your region in the world? Yes, this may take some research, but remember: you are looking to learn a new technology that will help you advance your career. This requires understanding which cloud providers are being adopted successfully in your area. The choice you make here should align with industry trends, as well as what will be of most value to your current employer and any potential future employers.

 

  • Is there a way for you to try the cloud provider's offering? There is nothing worse than investing too much time into learning something that you ultimately may not enjoy, or don't believe will meet your customers'/employer's needs. Get your hands on the technology and spend a few hours with it before committing to learning it fully. If you enjoy it, then take the gloves off and get your hands dirty learning it.

 

  • Certification? If you see value in the technology and are enjoying learning it, then look for a certification track. I personally do not believe that certification is necessary for all things, but if you are passionate about the cloud provider's offering, using certification to learn the product will go a long way—especially if you don't yet have real-world experience with the technology. Certification opens doors, and being able to put the certification on your resume can help you get started using it in the real world.

 

So, take some time to answer these questions before you dive into what you will learn next in IT. Answering these key questions, regardless of your technical interests, will bring you one step closer to deciding what to learn next without losing your mind.

 

Technology changes fast

 

Technology changes fast, and taking a strategic approach to your technical learning will keep your mind intact. Embrace, love, and learn more about technology and IT. It's a great ride!

aguidry

Sysadmania! Is Upon Us!

Posted by aguidry Employee Jul 27, 2017

SysAdmin Day is upon us, you SysAdmaniacs!

 

It’s your day! We know how you may feel underappreciated at times throughout the year, so we went all out and created a board game in your honor. That’s right. Now, after another annoying day resetting passwords and resolving bottlenecks caused by I/O-heavy apps, you can unwind with a couple of friends and SysadMANIA!

 

Here's a little sneak peek of what you can expect (well, something close to it):

 

 

Our game designers created SysadMANIA! with you in mind. We know what it feels like to be ignored when everything works and blamed when something breaks. We hear you! So we took your pain and turned it into hilarious fun. Head Geeks Patrick Hubbard and Leon Adato took it for a test run (tough job, huh?) and loved the experience. They agreed that the game provides a laugh-out-loud, relatable, therapeutic, thoughtful, super fun time.

 

patrick.hubbard said, “When a game uses socialization and chance cards to escape the game boss, you realize that something very special is happening. It’s Cards Against IT meets D&D meets Sorry! meets Battleship… and I laughed every turn. I laughed because the work cards are funny and Brad is truly terrible, but also because it’s cathartic. If you’ve ever suffered the indignity of having a desk in the basement, falling off a ladder hanging an AP, or failing a backup recovery, you’ll laugh even harder.”

 

 

"It taps into our humanity and celebrates the glory and triumph of IT over the frustration of tedium and stupid user questions. I can't wait for the first expansion pack! Please let it be cloud."

 

adatole loves it, too. He says, “Since my family observes a full day, every single week, of no electronics, we play a LOT of games. On a typical Saturday, you’ll find us around the table playing all kinds of games: Monopoly, Gin Rummy, Ticket to Ride, King of Tokyo, even Munchkins, or Exploding Kittens.

 

"When the Geeks sat down and unboxed SysadMANIA! for the first time, I figured it was a one-off joke. A game about life in IT. How quaint. But the truth is that there’s a GAME here. The combination of different personas displaying various strengths and weaknesses affect your ability to get through work challenges.

 

"What I liked best was that SysadMANIA! can be run as your typical ‘every player for themselves’ game, but happens to be more interesting – and truer to the spirit of IT pros everywhere – when played cooperatively à la Forbidden Island.”

 

Check out this video, as patrick.hubbard, kong.yang, chrispaap, and stevenwhunt battle through SysadMANIA:

 

 

So, there you have it. Glowing praise for SysadMANIA! We hope the day-long showing of respect and admiration by your colleagues is just as bright. Oh and if you want to join in on the SysadMANIA fun, visit: http://go.solarwinds.com/sysadmania for your chance to win some cool prizes, or even the actual board game!


Recently, Head Geek Destiny Bertucci (Dez) and I talked about certifications on an episode of SolarWinds Lab. For almost an hour we dug into the whys and hows of certifications. But, of course, the topic is too big to cover in just one episode.

 

Which is why I wanted to dig in a little deeper today. This conversation is one that you can expect I'll be coming back to at various points through the year. This dialogue will be informed by my experiences both past and present, as well as the feedback you provide as we go on. I want this to be a roundtable discussion, so at the end we'll all have something closer to a 360-degree view. My goal is to help IT professionals of all experience levels make an informed choice about certs: which ones to pursue, how to go about studying, where to set expectations about the benefits of certifying, and even tricks for preparing for and taking the exams.

 

For today's installment, I thought it might make sense to start at the beginning, meaning a bit of a walk down Certification Lane to look at the certs I already have, when I got them, and why.

 

To be clear, I don't mean this to be a #humblebrag in any way. Let's face it. If you watched the episode, you know that there are other Geeks with WAY more certifications than me. My point in recounting this is to offer a window into my decision-making process and, as I said, to get the conversation started.

 

My first tech certification was required by my boss. I was working at a training company that specialized (as many did at the time) in helping people move from the typing pool, where they used sturdy IBM Selectrics, to the data processing center, where WordPerfect was king. My boss advised me that getting my WPCE (WordPerfect Certified Resource) cert would accomplish two things:

 

  1. It would establish my credibility as a trainer.
  2. If I didn't know a feature before the test, I sure as heck would after.

 

This was not your typical certification test. WordPerfect shipped you out a disk (a 5.25" floppy, no less) and the test was on it. You had up to 80 hours to complete it, and it was 100% open book. That's right, you could use any resources you had to finish the test. Because at the end of the day, the test measured execution. Instead of just asking, "What 3-keystroke combination takes you to the bottom of the document?" the exam would open a document and ask that you DO it. A keylogger ensured the proper keystrokes were performed.

 

(For those who are scratching their heads, it's "Home-Home-DownArrow," by the way. I can also still perfectly recall the 4-color F-key template that was nearly ubiquitous at the time.)

 

The WordPerfect 4.2 keyboard template

 

And my boss was right. I knew precious little about things like macros before I cracked open the seal on that exam disk. But I sure knew a ton about them (and much more) when I mailed it back in. Looking back, the WPCE was like a kinder, gentler version of the CCIE practical exam. And I'm grateful that was my first foray into the world of IT certs.

 

My second certification didn't come until 7 years later. By that time I had worked my way up the IT food chain, from classroom instructor to desktop support, but I wanted to break into server administration. The manager of that department was open to the idea, but needed some proof that I had the aptitude. The company was willing to pay for the classes and the exams, so I began a months-long journey into the world of Novell networking.

 

At the time, I had my own ideas about how to do things (ah, life in your 20s, when you are omniscient!). I decided I would take ALL the classes, and once I had a complete overview of Novell, I'd start taking exams.

 

A year later, the classes were a distant dot in the rear-view mirror of life, but I still hadn't screwed up my courage to start taking the tests. What I did have, however, was a lot more experience with servers (by then, the desktop support team was asked to do rotations in the helpdesk, where we administered almost everything anyway). In the end, I spent many, many nights after work and late into the night reviewing the class books, and ended up taking the tests almost 18 months after the classes.

 

I ended up passing, but I also discovered the horrific nightmare landscape that is "adaptive exams" - tests that give you a medium-level question on a topic and, if you pass it, a harder question. This continues until you miss a question, at which point the level of difficulty drops down. And that pattern continues until you complete all the questions for that topic. On a multi-topic exam like the Certified Novell Engineer track, that means several categories of questions that come at you like a game of whack-a-mole where the moles are armed and trying to whack you back. And the exam ends NOT when you answer all the questions, but when it is mathematically impossible to fail (or pass). Which led to a heart-stopping moment on question 46 (out of 90) when the test abruptly stopped and said, "Please wait for results."

 

But it turns out I had passed.

 

Of course, I was prepared for this on the second test. Which is why the fact that it WASN'T adaptive caused yet more heart palpitations. On question 46 I waited for the message. Nothing. So I figured I had a few more questions to answer. Question 50 passed me by and I started to sweat. By question 60 I was in panic mode. At question 77 (out of 77), I was on the verge of tears.

 

But it turns out I passed that one, as well.

 

And 2 more exams later (where I knew to ASK the testing center what kind of test it would be before sitting down) I was the owner of a shiny new CNE (4.0, no less!).

 

And, as things often turn out, I changed jobs about 3 months later. It turns out that in addition to showing aptitude, the manager also needed an open req. My options were to wait for someone on the team to leave, or to take a job that fell out of the sky. A local headhunter cold-called my house, and the job he had was a server administration position paying significantly more than what I was making.

 

It also involved Windows servers.

 

By this time I'd been using Windows since it came for free on 12 5.25" floppies with Excel 1.0. For a large part of my career, "NT" was short for "Not There (yet)". But in 1998 when I switched jobs, NT 4.0 had been out for a while and proven itself a capable alternative.

 

Which is why, in 1999, I found myself NOT as chief engineer of the moon as it traveled through space but instead spending a few months of my evening hours studying for and taking the 5 exams that made up the MCSE along with the rest of my small team of server admins.

 

Getting our MCSE wasn't required, but the company once again offered to pay for both the class and the exam as a perk of the job (ah, those pre-bubble glory days!) so we all took advantage of it. This time I wasn't taking the test because I was told to, or to meet someone else's standard. I was doing it purely for me. It felt different, and not in a bad way.

 

By that point, taking tests had become old hat. I hadn't passed every single one, but my batting average was good enough that I was comfortable when I sat down and clicked "begin exam".

 

Ironically, it would be another 5 years before I needed to take a certification test.

 

In 2004, I was part of a company that was renewing their Cisco Gold Partner status, when the powers-that-be discovered they needed a few more certified employees. They asked for volunteers and I readily raised my hand, figuring this would be the same deal as the last time - night study for a few weeks, take a test, and everybody is happy.

 

It turns out that my company needed 5 certifications - CCNA (1 exam), MCSE (6 exams), MCSE+Messaging (add one more exam to the 6 for MCSE), Cisco Unity (1 exam), and Cisco Interactive Voice Response (1 exam). Oh, and they needed it by the end of the quarter. "I'm good," I told them, "but I'm not THAT good".

 

After a little digging, I discovered a unique option: go away to a 3-week "boot camp" where they would cover all the MCSE material *and* administer the exams. Go straight from that boot camp to a 1-week boot camp for the CCNA. Then come home and finish up on my own.

 

It is a testament to my wife's strength of character that not only did she not kill me outright for the idea, but she actually supported it. And so off I went.

 

The weeks passed in a blur of training material, independent study, exams passed, exams failed, and the ticking of the clock. And then it was home and back to the "regular" work day, but with the added pressure of passing two more exams on my own. In the end, it was the IVR exam (of all things) that gave me the most trouble. After two stupendously failed attempts, I passed.

 

Looking back, I know it was all a very paper tiger-y thing to do. A lot of the material - like the MCSE - was stuff I knew well and used daily. But some (like the IVR) were technologies I had never used and never really intended to use. But that wasn't the point, and I wasn't planning to go out and promote those certifications in any case.

 

But taking all those tests in such short order was also - and please don't judge me for this - fun. As much as some people experience test anxiety, the rush of adrenaline and the sense of accomplishment at the end is hard to beat. In the end, I found the whole experience rewarding.

 

And that, believe it or not, was the end of my testing adventure (well, if you don't count my SCP, but that's a post for another day) - at least it WAS until this year, when Destiny and I double-dog-dared each other to go on this certification marathon.

 

This time out, I think I'm able to merge the best of all those experiences. It is a lot of tests in a short period, but I'm only taking exams that prove the skills I've built up over my 30-year career. I'm not doing it to get a promotion or satisfy my boss or meet a deadline. It's all for me this time.

 

And it's also refreshingly simple. The idea that there is ONE correct answer to every question is a wonderful fiction, when compared to the average day of an IT professional.

 

So that's where things stand right now. Tell me where you are in your own certification journey in the comments below. Also let me know if there are topics or areas of the certification process that you want me to explore deeper in future posts.

This week's Actuator comes to you direct from a very quiet house because we have sent the spawn off to an overnight camp for two weeks. The hardest part for us will be doing all the chores that we've trained the kids to do over the years. It's like losing two employees!

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Avanti Breach – Does This Signify IoT Attacks Have Become Mainstream?

Yes. And, since you had to ask, I suggest you pay attention to all the other IoT hacks in the news.

 

Hacker steals $30M worth of Ethereum by abusing Parity wallet flaw

Good thing these banks have insurance to protect against loss… What's that you say? Oh.

 

A Hacker's Dream: American Password Reuse Runs Rampant

It’s time to get rid of passwords, and we all know it. Unfortunately, security isn’t a priority yet, so systems will continue to rely on antiquated authentication methods.

 

IoT Security Cameras Have a Major Security Flaw

Like I said, security is not a priority for many.

 

How Microsoft brought SQL Server to Linux

Somewhere, Leon Adato is weeping.

 

New IBM Mainframe Encrypts All the Things

This sounds wonderful until you remember that most breaches happen because someone left their workstation unlocked, or a USB key on a bus, etc.

 

Sweden leaked every car owners' details last year, then tried to hush it up

There is no security breach that cannot be made worse by trying to cover up the details of your mistakes.

 

This arrived last week and I wanted to say THANK YOU to Satya and Microsoft for making the best data platform on the planet:

The Europe/Middle East/Africa (EMEA) team have just finished our fourth Partner Bootcamp, and I would like to share some of our experiences from the event.

 

What is it?

For the last couple of years, the EMEA team has been running a multi-day event for our channel partners. It started as a short, 2-day event for a dozen or so of our distributors and resellers, hosted out of EMEA headquarters in Cork, Ireland, but has quickly grown. In fact, we've outgrown our office facilities, and now run the event at a local conference centre. Not only is it a great chance for our partners to learn about SolarWinds products and culture, it's also an excellent way for SolarWinds to learn from our channel partners about our customers and what they would like to see in our products.

 

Bunnyconnellan: Dinner Venue for Monday evening's entertainment

 

Who was there?

This latest event was the largest to date, with over 50 partners from all across the region, ranging from the UK, Germany, and Ireland, to South Africa, Turkey, Kenya, Bulgaria, Netherlands, and Spain. We even had Thwack MVP robertcbrowning in attendance!

From a SolarWinds perspective, it’s not just our local staff that attend the event. We had presenters from sales and support, out of our Cork and London offices, as well as folks from our Austin, Lehi, and Herndon offices.

 

What was covered?

  • What started as a small presentation forum with a single track of sessions has now evolved into a two-day sales track and a four-day tech track, allowing us to cater for various staff roles within the partner organizations. As an added bonus for our tech attendees, cal.smith and some of his team were on-site, allowing us to run the first-ever beta exam of the latest iteration of the revamped SCP!

A number of sessions ran over the course of the week, but some personal highlights include:

And of course, there may have been some well-earned après-boot camp entertainment as well.


Our partners prepping for the SCP beta

 

Sounds Awesome! What’s next?

We’re already going through feedback on what worked well – break-out workshops for sales, yeah! – and what didn’t work so well – some minor tech issues with the hands-on labs on the first day, boo!*

 

Of course, as with our products, we really take feedback to heart and are already planning some further evolution of the boot camp, both in terms of tech content and how we deliver it, as well as improvements in other areas. Stay tuned for info on future sessions, but in the meantime you can view some of what happened over on #SWChannelBootcamp

 


Here I am (left) with Cal (right) and tech trainers, Cheryl and Ed (centre).

 

 

*in our defense, we were victims of our own success in that we had about double the number of people we had planned for, but we got there in the end.

It sounds obvious, perhaps, but without configurations, our network, compute, and storage environments won't do very much for us. Configurations develop over time as we add new equipment, change architectures, improve our standards, and deploy new technologies. The sum of knowledge within a given configuration is quite high. Despite that, many companies still don't have any kind of configuration management in place, so in this article, I will outline some reasons why configuration management is a must, and look at some of the benefits that come with having it.

 

Recovery from total loss

As THWACK users, I think we're all pretty technically savvy, yet if I were to ask right now if you had an up-to-date backup of your computer and its critical data, what would the answer be? If your laptop's hard drive died right now, how much data would be lost after you replaced it?

 

Our infrastructure devices are no different. Every now and then a device will die without warning, and the replacement hardware will need to have the same configuration that the (now dead) old device had. Where's that configuration coming from?

 

Total loss is perhaps the most obvious reason to have a system of configuration backups in place. Configuration management is an insurance policy against the worst eventuality, and it's something we should all have in place. Potential ways to achieve this include:

 

 

At a minimum, having the current configuration safely stored on another system is of value. Some related thoughts on this:

 

  • Make sure you can get to the backup system when a device has failed.
  • Back up / mirror / help ensure redundancy of your backup system.
  • If "rolling your own scripts," make sure that, say, a failed login attempt doesn't overwrite a valid configuration file (he said, speaking from experience). In other words, some basic validation is required to make sure that the script output is actually a configuration file and not an error message; a minimal validation sketch follows this list.
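
Here is a minimal sketch of that validation idea, assuming a home-grown Python backup script that has just captured device output into a string. The markers it checks for, and the backups/ directory layout, are illustrative; the point is simply to refuse to overwrite the last good copy unless the new capture actually looks like a configuration.

```python
import os
import shutil


def looks_like_config(text: str) -> bool:
    """Rough check that captured output is a config, not an error banner."""
    if len(text.splitlines()) < 20:  # real configs are rarely this short
        return False
    if "Authentication failed" in text or "Login invalid" in text:
        return False
    # Expect at least one marker we know appears in our configs (assumption).
    return "hostname" in text or "interface" in text


def save_backup(device: str, captured: str) -> None:
    os.makedirs("backups", exist_ok=True)
    path = f"backups/{device}.cfg"
    if not looks_like_config(captured):
        raise ValueError(f"Capture from {device} does not look like a config")
    if os.path.exists(path):
        shutil.copyfile(path, path + ".previous")  # keep the last good copy
    with open(path, "w") as f:
        f.write(captured)
```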

 

Archives

Better than a copy of the current configurations, a configuration archive tracks all -- or some number of -- the previous configurations for a device.

 

An archive gives us the ability to see what changes occurred to the configuration and when. If a device doesn't support configuration rollback natively, it may be possible to create a kind of rollback script based on the difference between the two latest configurations. If the configuration management tool (or another system) can react to SNMP traps indicating a configuration change, the archive can be kept very current by triggering a grab of the configuration as soon as a change is noted.

 

Further, home-grown scripts or configuration management products can easily identify device changes and generate notifications and alerts when changes occur. This can provide an early warning of unauthorized configurations or changes made outside scheduled maintenance windows.
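
A minimal sketch of that change-detection idea, assuming the archive is simply a directory of saved configuration files per device (file names below are illustrative), needs nothing more than Python's difflib to produce a unified diff and flag a change when the diff is non-empty.

```python
import difflib
from pathlib import Path


def config_diff(old_file: str, new_file: str) -> str:
    """Return a unified diff between two archived configuration files."""
    old = Path(old_file).read_text().splitlines(keepends=True)
    new = Path(new_file).read_text().splitlines(keepends=True)
    return "".join(
        difflib.unified_diff(old, new, fromfile=old_file, tofile=new_file)
    )


diff = config_diff(
    "archive/core-sw1.2017-07-20.cfg", "archive/core-sw1.2017-07-21.cfg"
)
if diff:
    # In a real system this would open a ticket or send an alert instead.
    print("Configuration change detected outside a maintenance window:")
    print(diff)
```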

 

Compliance / Audit

Internal Memo

We need confirmation that all your devices are sending their syslogs to these seventeen IP addresses.

 

-- love from, Your Friendly Internal Security Group xxx

"Putting the 'no' in Innovation since 2003"

 

A request like this can be approached in a couple of different ways. Without configuration management, it's necessary to log in to each device and check the syslog server configuration. With a collection of stored configurations, however, checking this becomes a matter of processing configuration files. Even grepping them could extract the necessary information. I've written my own tools to do the same thing, using configuration templates to support the varying configuration stanzas that different vendors and OS flavors use to achieve the same result.
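
As a hedged sketch of that "process the stored configurations" approach, assume the archive is a folder of plain-text config files and that the required syslog destinations are known (the addresses below are placeholders, trimmed well short of seventeen).

```python
from pathlib import Path

# Placeholder list of required syslog destinations.
REQUIRED_SYSLOG_HOSTS = {"10.1.1.10", "10.1.1.11", "10.2.2.10"}


def missing_syslog_hosts(config_text: str) -> set:
    """Return the required syslog hosts that never appear in a configuration."""
    return {host for host in REQUIRED_SYSLOG_HOSTS if host not in config_text}


for cfg in Path("configs").glob("*.cfg"):
    missing = missing_syslog_hosts(cfg.read_text())
    if missing:
        print(f"{cfg.name}: missing syslog destinations {sorted(missing)}")
```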

 

Some tools — SolarWinds NCM is one of them — can also compare the latest configuration against a configuration snippet and report back on compliance. This kind of capability makes configuration audits extremely simple.

 

Even without a security group making requests, the ability to audit configurations against defined standards is an important capability to have. Given the importance of configuration consistency, it seems like a no-brainer to want a tool of some sort to help ensure that the carefully crafted standards have been applied everywhere.

 

Pushing configuration to devices

I'm never quite sure whether the ability to issue configuration commands to devices falls under automation or configuration management, but I'll mention it briefly here since NCM includes this capability. I believe I've said in a previous Geek Speak post that it's abstractions that are most useful to most of us. I don't want to write the code to log into a device and deal with all the different prompts and error conditions. Instead, I'd much rather hand off to a tool that somebody else wrote and say, Send this. Lemme know how it goes. If you have the ability to do that and you aren't the one who has to support it, take that as a win. And while you're enjoying the golden trophy, give some consideration to my next point.

 

Where is your one true configuration source?

Why do we fall into the trap of using hardware devices as the definitive source of each configuration? Bearing in mind that most of us claim we're working toward building a software-defined network of some sort, it does seem odd that the configuration sits on the device. Why does it not sit in a database or other managed repository, with the device programmed from the latest approved configuration in that repo?

 

Picture this for example:

 

  • Configurations are stored in a git repo
  • Network engineers fork the repo so they have a local copy
  • When a change is required, the engineer makes the necessary changes to their fork, then issues a pull request back to the main repo.
  • Pull requests can be reviewed as part of the Change Control process, and if approved, the pull request is accepted and merged into the main configuration repo.
  • The repo update triggers the changes to be propagated to the end device

 

Such a process would give us a configuration archive with a complete (and commented) audit trail for each change made. Additionally, if the device fails, the latest configuration is in the git repo, not on the device, so by definition, it's available for use when setting up the replacement device. If you're really on the ball, it may be possible to do some form of integration testing/syntax validation of the change prior to accepting the pull request.
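
One hedged sketch of the final step in that workflow, the repo update triggering a push to the device, could use the third-party Netmiko library. Everything here, from the device record to the file path, is illustrative, and in practice the call would be wrapped in whatever CI hook or pipeline your git server provides.

```python
from netmiko import ConnectHandler

# Illustrative device record; in practice this comes from your inventory system.
device = {
    "device_type": "cisco_ios",
    "host": "core-sw1.example.net",
    "username": "deploy",
    "password": "not-a-real-password",
}


def push_config(config_file: str) -> str:
    """Push the approved configuration file from the repo to the device."""
    conn = ConnectHandler(**device)
    try:
        output = conn.send_config_from_file(config_file)
        conn.save_config()  # persist running-config to startup-config
    finally:
        conn.disconnect()
    return output


print(push_config("configs/core-sw1.cfg"))
```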

 

There are some gotchas with this, not the least of which is that going from a configuration diff to something you can safely deploy on a device may not be as straightforward as it first appears. That said, thanks to commands like Junos' load replace and load override and IOS XR's commit replace, such things are made a little easier.

 

The point of this is not really to get into the implementation details, but more to raise the question of how we think about network device configurations in particular. Compute teams get it; using tools like Puppet and Chef to build and maintain the state of a server OS, it's possible to rebuild an identical server. The same applies to building images in Docker: the configuration doesn't live within the image because it's captured in the Dockerfile. So why not network devices, too? I'm sure you'll tell me, and I welcome it.

 

Get. Configuration. Management. Don't risk being the person everybody feels pity for after their hard drive crashes.

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

Ensuring that deployed U.S. troops can communicate and exchange information is critical to the military’s missions. That said, there are numerous challenges in deploying the high-speed tactical networks that make this communication possible. How, for example, do you make sure these networks are available when needed? What is the best way to maintain data integrity? The accuracy of the data—such as troop location—is just as important as network availability.

 

Network security is, of course, critical as well. Particularly with tactical Wi-Fi networks, it is crucial to ensure that our military personnel are the only ones accessing the network, and that there is no exfiltration going undetected.

 

Network Setup

In-field networks often are a simple series of antennas on trucks that create a local Wi-Fi network. Through this local Wi-Fi network the trucks “talk” to one another, as do the troops. How, then, does the federal IT professional help ensure availability, data integrity, security, and more? The answer? Advanced network monitoring, implemented through the integration of three distinct, critical capabilities.

 

Capability #1: Network performance monitoring

This is essentially monitoring the network infrastructure for performance, latency, network saturation, and more. In the event of a latency issue, managers must discern if the problem stems from the network or an application to help ensure continued, uninterrupted connections.

 

Be sure the network monitoring tool provides a physical map overlay feature, particularly important in a tactical Wi-Fi scenario, to indicate where the best network coverage is located. With this ability, someone on the move remains connected no matter what. In fact, with the addition of a heat map overlay, managers can see where coverage might be lacking and whether, for example, a truck with an antenna should move over a hill to get a better connection.

 

Capability #2: User device tracking

Within any network environment, but especially within a tactical situation, the IT team must be able to see where users are going and what applications they're trying to access. This information dramatically enhances security, particularly data integrity, by helping ensure that exfiltration is not an issue. More advanced user device tracking also allows managers to turn on and off devices that might be causing bigger-picture network or latency issues.

 

Capability #3: Event monitoring and logging

Real-time logging of all network activity is critical. IT administrators must know who is accessing the network and where they're going at all times. If an infiltration incident occurs, this is where to find it and stop it—automatically, in some cases. Event monitoring and logging provides the ability to log all events and to correlate them across the infrastructure as well as by physical location, which is especially important in the field.

 

One view

Finally, but perhaps most important, be sure you can see all of the information provided from the three previous capabilities through a single integrated view. The intersection of the information received—network data AND user data AND log data—will provide the needed intelligence to make the most informed decisions during the most critical missions.

 

Find the full article on Federal News Radio.

kong.yang

What's in an IT title?

Posted by kong.yang Employee Jul 21, 2017

Continuous integration. Continuous delivery. Cloud. Containers. Microservices. Serverless. IoT. Buzzworthy tech constructs and concepts are signaling a change for IT professionals. As IT pros adapt and evolve, the application remains the center of the change storm. More importantly, the end goal for IT remains essentially the same as it always has been: keep the revenue-impacting applications performing as optimally as possible. Fundamental principles remain constant below the surface of anything new, disruptive, and innovative. This applies to IT titles and responsibilities as well.

 

Take, for example, the role of site reliability engineer (SRE), which was Ben Treynor’s 2003 creation at Google. He describes it as what happens when you ask a software engineer to perform an operations function. Google lists it as a discipline that combines software and systems engineering to build and run large-scale, massively distributed, fault-tolerant systems. Even before the coining of the term SRE, there were IT professionals who came before and built out massively distributed, fault-tolerant, large-scale systems. They just weren’t called SREs. Fast forward to 2008, and another title started to gain momentum: DevOps engineer aka the continuous integration/continuous delivery engineer. Regardless of their titles, core competencies remain fundamentally similar. 

 

Speaking of IT titles, how do you identify yourself with respect to your professional title? I've been a lab monitor, a systems engineer, a member of technical staff, a senior consultant, a practice leader, and now a Head Geek™. Does your title bring you value? Let me know in the comment section below.

Every good engineer/administrator knows that data without context is useless data. Take income, for instance. Let's say you are offered a position that pays $3,000 USD a month. Without context, you cannot discern whether this is a good salary for your industry. Without knowing the location, the average income in your area, and the various tax rates, that dollar amount doesn't tell you much. Adding context to your original data set provides needed insights and perspective. This salary likely wouldn't cover rent in San Francisco or London, but $3,000 USD would allow you to live quite comfortably in other parts of the world.

 

Can we extrapolate this example to IT systems? Imagine that a user complains that application X is performing slowly. You log into your systems and see that CPU usage on one of the servers is at 100%. If that is not the usual load on this server, you know that something is amiss. You can start looking at your virtualization environment and determine if something unusual is going on. But still, is that enough context?

 

SolarWinds Orion and PerfStack: Layering and Collating Monitoring Data in Real-time

With SolarWinds® Orion® Platform and the new PerfStack™ functionality, you can contextualize reported slow response issues with other events happening in real-time in your environment. You can, for example, look at concurrent network throughput, database transactions, and I/O load on the storage subsystem to help determine whether any I/O bottlenecks are contributing to the high load and low application performance. If you need to add or remove metrics, you can do so via drag-and-drop methods that provide instant visibility with real-time correlation.

 

The major advantage of SolarWinds Orion Platform is the wide spectrum of technology stacks that are covered when products are plugged into Orion. You are no longer limited to virtual infrastructure monitoring or storage vendor management tools. You can correlate data from various technology silos (database, storage, network, virtualization, etc.), select the relevant metrics from each of these stacks, and put them together in the PerfStack dashboard. You get a comprehensive overview and understanding of a given issue in your environment, while also having the ability to monitor the evolution of the situation in real-time.

 

Ready for Hybrid IT

PerfStack is also well-equipped to handle the modern challenges posed by hybrid IT. As enterprise IT and business entities embrace the cloud in all its diversity, traditional operational challenges evolve and new complexities are added. Whether applications run fully in the cloud or are a mix of cloud and on-premises components, additional stacks and network hops are introduced. If we look at Amazon Web Services™ (AWS®), we may have to handle not only our Amazon EC2® virtual machine instances but possibly also Simple Storage Service (S3™) buckets and all the network components in between. PerfStack is the perfect tool to help you discover which hybrid IT stack component is causing your performance issues.

 

An Ice-breaker for Infrastructure Silos

The fact that PerfStack leverages data from various sources helps increase cross-team collaboration across the enterprise. PerfStack becomes the central source of monitoring data, which helps bring teams together and break down the natural boundaries (technological, organizational, and social) between infrastructure silos. PerfStack helps service lines pin down the root cause of an abnormal event. Beyond that, the irrefutability and objectivity of multiple sources of data layered together stops the finger-pointing and the bouncing of support tickets between infrastructure teams. Finally, many IT professionals have had to deal with intermittent issues that nobody can resolve. PerfStack helps track these transient probe effect/Heisenbug issues, which affect application performance but are not easily traceable.

 

Today, the focus is more on applications than ever. Uptime is critical, so reducing downtime is essential to any IT department involved in supporting business functions. PerfStack adds value to organizations by dramatically reducing the time to resolution of operational incidents.

 

Whether your apps support critical lifesaving systems, financial functions, or manufacturing lines, PerfStack is a valuable ally to IT professionals who want to move their IT department from a cost center to a strategic asset.

 

Author: Max Mortillaro, an independent data center consultant specializing in virtualization and storage technologies. Max is a four-time VMware vExpert and a Tech Field Day delegate.

I may be dating myself, but does anyone else remember when MTV® played music videos? The first one they ever played was The Buggles' "Video Killed the Radio Star." The synth-pop feel of the song seemed so out of place with the words, which outlined the demise of the age of the radio personality. Thinking back, this was the first time I can remember thinking about one new technology completely supplanting another. The corollary to this concept is that radio stars are now antiquated and unneeded.

 

Fast forward a few decades and I'm entrenched in IT. I'm happily doing my job and I hear about a new technology: virtualization. At first, I discounted it as a fad (as I'm sure many of us old-school technologists did). Then it matured, stabilized, and gained a foothold.

 

One technology again supplanted another and virtualization killed the physical server star. Did this really kill off physical servers entirely? Of course not. No more so than video killed radio. It just added a level of abstraction. Application owners no longer needed to worry about the physical hardware, just the operating system and their applications. Two things happened:

1. Application owners had less to worry about

2. A need for people with virtualization experience developed

 

From that point on, every new person who entered IT understood virtualization as a part of the IT stack. It became an accepted technology, and direct knowledge of physical servers was relegated to secondary or specialized knowledge. Knowing about firmware and drivers was suddenly so "retro."

 

Virtualization matured and continued to flourish, and with it, new vendors and capabilities entered the market. But dark clouds were on the horizon. Or perhaps they weren't dark, just "clouds" on the horizon: private clouds, hybrid clouds, public clouds, fill-in-the-blank clouds. The first vendor I remember really pushing the cloud was Amazon® with their Amazon Web Services™ (AWS®).

 

Thinking back, this seemed like history repeating itself. After all, according to many, Amazon nearly destroyed all brick-and-mortar bookstores. It looked like they were trying to do the same for on-premises virtualization. After all, why worry about the hardware and storage yourself when you can pay someone else to worry about it, right?

 

This seems reminiscent of what happened with virtualization. You didn't worry about the physical server anymore; it became someone else's problem. You just cared about your virtual machine.

 

So, did cloud kill the virtualization star, which previously killed the server star? Of course not. For the foreseeable future, cloud will not supplant the virtualization specialist, no more so than virtualization supplanted the server specialist. It's now just a different specialization within the IT landscape.

 

What does this mean for us in IT? Most importantly, keep abreast of emerging technologies. Look to where you can extend your knowledge and become more valuable, but don't "forget" your roots.

 

You never know; one day you may be asked to update server firmware.


This is a cross-post from a post with the same name on the VMBlog.

Back from the beach and back in the saddle again. It was good to get away from the world for a week and sit by the lake or at the beach and just listen to the sound of the water. But, it's also good to get back to the keyboard, too.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Consumer Routers Report Concludes: It's a Market of Lemons

It's not just routers. It's every Wi-Fi-enabled appliance and device on the planet. For the folks who design and build such devices, security is not a top priority.

 

Azure Network Watcher

Microsoft dips its toes into the world of network monitoring. Hanselman says, "Networking people are going to go nuts over this." I'll let you decide.

 

Every Total Solar Eclipse in your Lifetime

In case you didn't know about, or might not be able to make, the one happening next month here in the United States.

 

Microsoft to employ unused TV channels to offer rural broadband

I've often commented about how Microsoft is becoming a utility company by providing a suite of cloud services. I'm now taking wagers on whether they get into electricity or water next.

 

Here’s How Azure Stack Will Integrate into Your Data Center

Still concerned about putting your data in a public cloud? Worry no more! You can now have Azure in your own data center!

 

More on the NSA's Use of Traffic Shaping

I found this article interesting as I just finished the book Code Warriors while on vacation. I highly recommend it, especially for folks interested in the history of the NSA.

 

Keep security in mind on your summer vacation

Good tips, and not just for vacation. They apply to anyone who travels frequently, or even decides to work from Starbucks now and then.

 

Around here, it's not officially summertime until you have some Del's:

Dels.jpg


To Cloud, or Not to Cloud

Posted by scuff Jul 18, 2017

Before cloud was a thing, I looked after servers that I could physically touch. In some cities, those servers were in the buildings owned by the organization I worked for. My nation’s capital was also the IT capital at the time, and my first IT role was on the same floor as our server room and mainframe (yes, showing my age but that’s banking for you). In other cities, we rented space in someone else’s data center, complete with impressive physical access security. I have fond memories of overnight change windows as we replaced disks or upgraded operating systems. Now that I'm in the SMB world, I often miss false floors, flickering LEDs, the white noise of servers, and the constant chill of air conditioning.

 

I saw the mainframe slowly disappear, and both server consolidation and virtualization shrank our server hardware footprint. All of that was pre-cloud.

 

Now the vendors have pivoted, with a massive shift in their revenue streams to everything “as a Service.” And boy, are they letting us know. Adobe is a really interesting case study on that shift, stating that their move away from expensive, perpetually licensed box software has reinvigorated the company and assured their survival. They could have quite easily gone the way of Kodak. The vendors are laughing all the way to the bank, as their licensing peaks are flattened out into glorious, monthly recurring revenue. They couldn’t be happier.

 

But where does it leave us, the customer?

 

I want to put aside the technical aspects of the cloud (we’ll get to those in the next article) and explore other aspects of being a cloud customer. For starters, that’s a big financial shift for us. We now need less capital expenditure and more operational expenditure, which in my town means more tax deductions. Even that has pros and cons, though. Are you better off investing in IT infrastructure during the good times, not having to keep paying for it in the lean months? (This is a polarizing point of view on purpose. I'm digging for comments, so please chime in below.)

 

What about vendor lock-in? Are we comfortable with the idea that we could take our virtual servers and associated management tools from AWS and move them to Microsoft Azure, or does our reliance on a vendor’s cloud kill other options in the future? It feels a little like renegotiating that Microsoft Enterprise Agreement. Say "no" and rip out all of our Microsoft software? Does the cloud change that, or not?

 

In some areas, do we even have a choice? We can’t buy a Slack or Microsoft Teams collaboration package outright and install it on an on-premises server. Correct me if I’m wrong here, but maybe you’ve found an open source alternative?

 

So, with the technical details aside, what are our hang-ups with a cloud consumption model? Would the vendors even listen, or do they have an answer for every objection? Tell me why the cloud makes no sense or why we should all agree that it's okay that this model has become so normal.

 

Viva la cloud!

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

In last year’s third annual SolarWinds Federal Cybersecurity Survey, 38 percent of respondents indicated that the increasing use of smart cards is the primary reason why federal agencies have become less vulnerable to cyberattacks than a year ago. The 2016 survey also revealed that nearly three-fourths of federal IT professionals use smart cards as a means of network protection. And more than half of those surveyed noted that smart cards are the most valuable product when it comes to network security.

 

Indeed, thanks to their versatility, prevalence, and overall effectiveness, there’s no denying that smart cards play a crucial role in providing a defensive layer to protect networks from breaches. Case in point: the attack on the Office of Personnel Management that exposed more than 21 million personnel records. The use of smart cards might have provided sufficient security to deter such an attack.

 

But there’s increasing evidence that the federal government may be moving on from identity cards sooner than you may think. Department of Defense (DoD) Chief Information Officer Terry Halvorsen has said that he plans to phase out secure identity cards over the next two years in favor of more agile, multi-factor authentication.

 

Smart cards may be an effective first line of defense, but they should be complemented by other security measures that create a deep and strong security posture. First, federal IT professionals should incorporate Security Information and Event Management (SIEM) into the mix. Through SIEM, managers can obtain instantaneous log-based alerts regarding suspicious network activity, while SIEM tools provide automated responses that can mitigate potential threats. It’s a line of defense that must not be overlooked.
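
As a toy illustration of what one such log-based alert looks like (a real SIEM adds log normalization, cross-source correlation, and automated response on top), the sketch below assumes syslog-style SSH authentication lines piped in on standard input and flags any source address that racks up repeated failed logins. The threshold and log format are assumptions for the example, not recommendations.

# Toy illustration only: this is the kind of rule a SIEM automates at scale.
# Assumes auth log lines such as:
#   "Jul 18 10:15:02 host sshd[123]: Failed password for admin from 203.0.113.7 port 51514 ssh2"
# (Time windowing omitted for brevity.)
import re
import sys
from collections import Counter

FAILED = re.compile(r"Failed password for \S+ from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5  # failed attempts from one source before raising an alert

counts = Counter()
for line in sys.stdin:
    match = FAILED.search(line)
    if match:
        src = match.group(1)
        counts[src] += 1
        if counts[src] == THRESHOLD:
            print(f"ALERT: {THRESHOLD} failed logins from {src}")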

 

Federal IT professionals may also want to consider implementing network configuration management software. These tools can help improve network security and compliance by automatically detecting and preventing out-of-process changes that can disrupt network operations. Users can more easily monitor and audit the myriad devices hitting their networks, configurations can be assessed for compliance, and known vulnerabilities can be addressed more easily. It’s another layer of protection that goes beyond smart cards alone.

 

At the end of the day, no single tool or technology can provide the impenetrable defense our IT networks need to prevent a breach or attack, and technology is continually changing. It is the duty of every federal IT professional to stay current on the tools and technologies that can make our networks safer.

 

Be sure to look at the entire puzzle when it comes to your network’s security. Know your options and employ multiple tools and technologies so that you have a well-fortified network that goes beyond identification tools that may soon be outdated anyway. That’s the really smart thing to do.

 

  Find the full article on GovLoop.

The concern for individual privacy has been growing since the 19th century, when the mass dissemination of newspapers and photography became commonplace. The concerns we had then -- the right to be left alone and the right to keep private what we choose to keep private -- are echoed in the conversations we carry on today about data security and privacy.

 

The contemporary conversation about privacy has centered on, and ironically also has been promulgated by, the technology that’s become part of our daily life. Computer technology in particular, including the internet, mobile devices, and the development of machine learning, has enriched our lives in many ways. However, the advances of these and other technologies have grown partly due to the collection of enormous amounts of personal information. Today we must ask if the benefits, both individual and societal, are worth the loss of some semblance of individual privacy on a large scale.

 

Keep in mind that privacy is not the same as secrecy. When we use the bathroom, everyone knows what we're doing, so there's no secret. However, our use of the bathroom is still very much a private matter. On the other hand, credit card information, for most people, is considered a secret. Though some of the data that's commonly collected today might not necessarily be secret, we still must grapple with issues of privacy, or, in other words, we must grapple with the right to share or keep hidden information about ourselves.

 

An exhaustive look at the rights of the individual with regard to privacy would take volumes to analyze its cultural, legal, and deeply philosophical foundation, yet today we find ourselves doing just that. Our favorite technology services collect a tremendous amount of information about us with what we hope are well-intentioned motives. Sometimes this happens without our noticing, such as when our browsing history or IP address is recorded. Sometimes these services invite us to share information, such as when we are asked to complete an online profile for a social media website.

 

Seeking to provide better products and services to customers is a worthy endeavor for a company, but concerns arise when a company doesn't secure our personal information, which puts our cherished privacy at risk. In terms of government entities and nation-states, the issue becomes more complex. The balance between privacy and security, between the rights of the individual and the safety of a society, has been the cause of great strife and even war.

 

Today's technology exacerbates this concern and fuels the fire of debate. We're typically very willing to share personal information with social media websites and, in the case of retail institutions such as e-commerce websites and online banks, even secret information. Though this is data we choose to give, we do so with an element of trust that these institutions will handle our information in a way that sufficiently ensures its safety and our privacy.

 

Therein lies the problem. It's not that we're unwilling to share information, necessarily. The problem is with the security of that information.

 

In recent years, we’ve seen financial institutions, retail giants, hospitals, e-commerce companies, and the like all fall prey to cyber attacks that put our private and sometimes secret information at risk of compromise.

 

Netflix knows our credit card information.

 

Facebook knows our birthday, religion, sexual preference, and what we look like.

 

Google knows the content of our email.

 

Many mobile app makers know our exact geographic location.

 

Mortgage lenders know our military history and our disability status.

 

Our nations know our voting history and political affiliation.

 

We almost need to share this information to function in today's society. Sure, we could drop off the grid, but except for that sort of dramatic lifestyle change, we've come to rely on email, e-commerce, electronic medical records, online banking, government collection of data, and even social media.

 

Today, organizations, including our own employers, store information of all types, including our personal information, in databases that are sometimes distributed around the world. This adds another layer of complexity. With globally distributed information, we must deal with competing cultures, values, and laws that govern the information stored within and traversing national borders.

 

The security of our information, and therefore the control of our privacy, is now almost completely out of our hands, and it's getting worse.

 

Those of us working in technology might respond by investing in secure, encrypted email services, utilizing password best practices, and choosing to avoid websites that require significant personal information. But even we, as technology professionals, use online banking, hand over tremendous private and secret information to our employers, and live in nations in which our governments collect, store, and analyze personal data on a consistent basis.

  

The larger society seems to behave similarly. There may be a moment of hesitation when entering our social security number in an online application; nevertheless, we enter and submit it. Private and public institutions have reacted to this by developing both policy and technological solutions to mitigate the risk associated with putting our personal information out there. Major components of HIPAA seek to protect individuals' medical information. PCI-DSS was created to protect individuals' credit card information in an effort to reduce credit card fraud. Many websites are moving away from unencrypted HTTP to encrypted HTTPS.

 

So the climate of data security doesn't seem to be centered much on limiting the collection of information. The benefit we gain from data collection and analysis precludes our willingness to stop sharing our personal and secret information. Instead, attention is given to securing information and developing cultural best practices to protect ourselves from malicious people and insecure technology. The reaction, by and large, hasn't been to share less, but to better protect what we share.

 

In mid-2017, we see reports of cyber attacks and data breaches almost daily. These are the high-profile attacks that make the headlines, so imagine how much malicious activity is actually going on. It's clear that the current state of data security, and therefore our privacy, is in peril. Cyber attacks and their subsequent breaches are so commonplace that they've become part of our popular culture.

 

That aspect of data security is getting worse exponentially, and since we're mostly unwilling or unable to stop sharing personal information, we must ensure that our technology and cultural practices also develop exponentially to mitigate that risk.

 


This version of the Actuator comes from the beaches of Rhode Island, where the biggest security threat we face is seagulls trying to steal our bacon snacks.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Someone's phishing US nuke power stations

“We don’t want you to panic, but here’s a headline to an article to make you panic.”

 

Critical Infrastructure Defenses Woefully Weak

If only there were warning signs that someone was actively trying to hack in.

 

Black Hat Survey: Security Pros Expect Major Breaches in Next Two Years

Oh, I doubt it will take a full two years for another major breach.

 

MIT researchers used a $150 Microsoft Kinect to 3D scan a giant T. rex skull

“Ferb, I know what we’re gonna do today.”

 

Tesla Loses 'Most Valuable U.S. Carmaker' Crown As Stock Takes $12 Billion Hit

I liked the original title better: “Tesla stock at bargain prices!” It’s OK, Elon, just keep telling yourself that it wasn’t real money…yet.

 

Salary Gossip

Well, this stood out: “DO NOT WASTE MONEY ON AN MBA. You will make 2X more on average as an engineer.”

 

Wikipedia: The Text Adventure

This is both brilliant and horrible. If you are anything like me, you will spend 30 minutes traveling from the Statue of Liberty through lower Manhattan.

 

Just a regular night at the beach:

FullSizeRender 2.jpg

As a network engineer, I don't think I've ever had the pleasure of having every device configured consistently in a network. But what does that even mean? What is consistency when we're potentially talking about multiple vendors and models of equipment?

 

There Can Only Be One (Operating System)

 

Claim: For any given model of hardware there should be one approved version of code deployed on that hardware everywhere across an organization.

 

Response: And if that version has a bug, then all your devices have that bug. This is the same basic security paradigm that leads us to have multiple firewall tiers comprising different vendors for extra protection against bugs in one vendor's code. I get it, but it just isn't practical. The reality is that it's hard enough upgrading device software to keep up with critical security patches, let alone doing so while maintaining multiple versions of code.

Why do we care? Because different versions of code can behave differently. Default command options can change between versions; previously unavailable options and features are added in new versions. Basically, having a consistent revision of code running means that you have a consistent platform on which to make changes. In most cases, that is probably worth the relatively rare occasions on which a serious enough bug forces an emergency code upgrade.

 

Corollary: The approved code version should be changing over time, as necessitated by feature requirements, stability improvements, and critical bugs. To that end, developing a repeatable method by which to upgrade code is kind of important.
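
A first step toward that repeatable method is knowing who is out of compliance. Below is a minimal sketch, assuming you can export an inventory of hostname, model, and running version from whatever management tool you already use; the model names and version strings are hypothetical placeholders.

# Minimal sketch: compare each device's running version against the approved
# version for its model. The inventory would normally come from an NMS export.
APPROVED = {
    "model-a-9300": "17.3.5",   # hypothetical model/version pairs
    "model-b-7050": "4.28.1F",
}

inventory = [
    {"hostname": "nyc-access-01", "model": "model-a-9300", "version": "17.3.5"},
    {"hostname": "lon-leaf-02", "model": "model-b-7050", "version": "4.27.0F"},
]

for device in inventory:
    approved = APPROVED.get(device["model"])
    if approved is None:
        print(f"{device['hostname']}: no approved version defined for {device['model']}")
    elif device["version"] != approved:
        print(f"{device['hostname']}: running {device['version']}, approved is {approved}")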

 

Consistency in Device Management

 

Claim: Every device type should have a baseline template that implements a consistent management and administration configuration, with specific localized changes as necessary. For example, a template might include:

 

  • NTP / time zone
  • Syslog
  • SNMP configuration
  • Management interface ACLs
  • Control plane policing
  • AAA (authentication, authorization, and accounting) configuration
  • Local account if AAA authentication server fails*

 

(*) There are those who would argue, quite successfully, that such a local account should have a password unique to each device. The password would be extracted from a secure location (a break-glass type of repository) on demand when needed and changed immediately afterward to prevent reuse of the local account. The argument is that a shared password, if compromised, leaves every device open to unauthorized access. I agree, and I tip my hat to anybody who successfully implements this.

 

Response: Local accounts are for emergency access only because we all use a centralized authentication service, right? If not, why not? Local accounts for users are a terrible idea, and have a habit of being left in place for years after a user has left the organization.

 

NTP is a must for all devices so that syslog/SNMP timestamps are synced up. Choose one time zone (I suggest UTC) and implement it on your devices worldwide. Using a local time zone is a guaranteed way to mess up log analysis the first time a problem spans time zones; whatever time zone makes the most sense, use it, and use it everywhere. The same time zone should be configured in all network management and alerting software.

 

Other elements of the template are there to make sure that the same access is available to every device. Why wouldn't you want to do that?

 

Corollary: Each device and software version could have its own limitations, so multiple templates will be needed, adapted to the capabilities of each device.
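
To make the baseline concrete, here is a minimal sketch of one such template, assuming the Jinja2 library and IOS-flavored commands purely as placeholders; the values and syntax would come from your own standards, with a separate variant per platform as the corollary suggests.

# Minimal sketch of a baseline template. Commands and values are placeholders;
# maintain one template variant per device type/software version as needed.
from jinja2 import Template

BASELINE = Template("""\
clock timezone UTC 0
ntp server {{ ntp_server }}
logging host {{ syslog_server }}
snmp-server community {{ snmp_community }} RO {{ mgmt_acl }}
""")

print(BASELINE.render(
    ntp_server="192.0.2.10",
    syslog_server="192.0.2.20",
    snmp_community="example-ro",   # placeholder; prefer SNMPv3 where supported
    mgmt_acl="MGMT-ACL",
))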

 

Naming Standards

 

Claim: Pick a device naming standard and stick with it. If it's necessary to change it, go back and change all the existing devices as well.

 

Response: I feel my hat tipping again, but in principle this is a really good idea. I did work for one company where all servers were given six-letter dictionary words as their names, a policy driven by the security group who worried that any kind of semantically meaningful naming policy would reveal too much to an attacker. Fair play, but having to remember that the syslog servers are called WINDOW, BELFRY, CUPPED, and ORANGE is not exactly friendly. Particularly in office space, it can really help to be able to identify which floor or closet a device is in. I personally lean toward naming devices by role (e.g. leaf, access, core, etc.) and never by device model. How many places have switches called Chicago-6500-01 or similar? And when you upgrade that switch, what happens? And is that 6500 a core, distribution, access, or maybe a service-module switch?

 

Corollary: Think the naming standard through carefully, including giving thought to future changes.
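
One low-effort way to enforce whatever standard you pick is to validate names automatically. The sketch below assumes a hypothetical site-role-index convention (for example, nyc-core-01) and simply flags anything that doesn't match; adjust the allowed sites and roles to your own convention.

# Minimal sketch: validate hostnames against a hypothetical site-role-index standard.
import re

NAME_RE = re.compile(r"^(nyc|lon|fra)-(core|dist|access|leaf)-\d{2}$")

for hostname in ["nyc-core-01", "Chicago-6500-01"]:
    status = "ok" if NAME_RE.match(hostname) else "violates naming standard"
    print(f"{hostname}: {status}")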

 

Why Do This?

 

There are more areas that could and should be consistent. Maybe consider things like:

 

  • an interface naming standard
  • standard login banners
  • routing protocol process numbers
  • VLAN assignments
  • CDP/LLDP
  • BFD parameters
  • MTU (oh my goodness, yes, MTU)

 

But why bother? Consistency brings a number of obvious operational benefits.

 

  • Configuring a new device using a standard template means a security baseline is built into the deployment process
  • Consistent administrative configuration reduces the number of devices which, at a critical moment in troubleshooting, turn out to be inaccessible
  • Logs and events are consistently and accurately timestamped
  • Things work, in general, the same way everywhere
  • Every device looks familiar when connecting
  • Devices are accessible, so configurations can be backed up into a configuration management tool, and changes can be pushed out, too
  • Configuration audit becomes easier

 

The only way to know if the configurations are consistent is to define a standard and then audit against it. If things are set up well, such an audit could even be automated. After a software upgrade, run the audit tool again to help ensure that nothing was lost or altered during the process.
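
As a minimal sketch of that audit logic, assuming you already back up running configurations as text: check that every line of the (already variable-substituted) baseline appears in each device's configuration and report what's missing. Real audits also need to handle hierarchical config sections, but the principle is the same.

# Minimal sketch: report baseline lines missing from a device's running config.
def audit(baseline_lines, running_config):
    present = {line.strip() for line in running_config.splitlines()}
    return [line for line in baseline_lines if line not in present]

baseline = ["clock timezone UTC 0", "ntp server 192.0.2.10", "logging host 192.0.2.20"]
running = "hostname nyc-access-01\nclock timezone UTC 0\nntp server 192.0.2.10\n"

missing = audit(baseline, running)
print("compliant" if not missing else f"missing: {missing}")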

 

What does your network look like? Is it consistent, or is it, shall we say, a product of organic growth? What are the upsides -- or downsides -- to consistency like this?

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

Last year, hybrid IT was the new black, at least according to the SolarWinds 2016 Public Sector IT Trends Report. In surveying 116 public sector IT professionals, we found that agencies are actively moving much of their infrastructure to the cloud, while still keeping a significant number of applications in-house. They want the many benefits of the cloud (cost efficiency, agility) without the perceived drawbacks (security, compliance).

 

Perceptions in general have evolved since our 2015 report, where 17 percent of respondents said that new technologies were “extremely important” for their agencies’ long-term successes; in 2016, that number jumped to 26 percent. Conversely, in 2015, 45 percent of survey respondents cited “lack of skills needed to implement/manage” new technologies as a primary barrier to cloud adoption; in 2016, only 30 percent of respondents cited that as a problem, indicating that cloud skill sets may have improved.

 

In line with these trends, an astounding 41 percent of respondents in this year’s survey believe that 50 percent or more of their organizations’ total IT infrastructure will be in the cloud within the next three to five years. This supports the evidence that since the United States introduced the Federal Cloud Computing Strategy in 2011, there’s been an unquestionable shift toward cloud services within government IT. We even saw evidence of that way back in 2014, when that year’s IT Trends Report indicated that more than 21 percent of public sector IT professionals felt that cloud computing was the most important technology for their agencies to remain competitive.

 

However, there remain growing concerns over security and compliance. Agencies love the idea of gaining agility and greater cost efficiencies, but some data is simply too proprietary to hand over to an offsite hosted services provider, even one that is Federal Risk and Authorization Management Program (FedRAMP)-compliant. This has led many organizations to hedge their bets on which applications and how much data they wish to migrate. As such, many agency IT administrators have made the conscious decision to keep at least portions of their infrastructures on-premises. According to our research, it’s very likely that these portions will never be migrated to the cloud.

 

Thus, some applications are hosted, while others continue to be maintained within the agencies themselves, creating a hybrid IT environment that can present management challenges. When an agency has applications existing in multiple places, it creates a visibility gap that makes it difficult for federal IT professionals to completely understand what’s happening with those applications. A blind spot is created between applications hosted offsite and those maintained in-house. Administrators are generally only able to monitor internally or externally. As a result, they can’t tell what’s happening with their applications as data passes from one location to another.

 

That’s a problem in a world where applications are the government’s lifeblood, and where administrators have invested a lot of time and resources into getting a comprehensive picture of application performance. For them, it’s imperative that they implement solutions and strategies that allow them to map hybrid IT paths from source to destination. This can involve “spoofing” application traffic to get a better picture of how on-site and hosted applications are working together. Then, they can deploy security and event management tools to monitor what’s going on with the data as it passes through the hybrid infrastructure.

 

In fact, in our 2016 survey, we found that monitoring and management tools and metrics are in high demand in today’s IT environment. Forty-eight percent of our respondents recognized this combination as critically important to managing a hybrid IT infrastructure. According to them, it’s the most important skill set they need to develop at this point.

 

Next year, when we do our updated report, we’ll probably see different results, but I’m willing to bet that cloud migration will still be at the top of public sector IT managers’ to-do lists. Like polo shirts and cashmere sweaters, it’s something that won’t be going out of style anytime soon.

 

Find the full article on Federal Technology Insider.


After a week with over 27,000 network nerds (and another week to recover), I'm here to tell you about who and what I saw (and who/what I missed) at Cisco Live US 2017.

 

The View From the Booth

Monday morning the doors opened and WE WERE MOBBED. Here are some pictures:

20170626_100259.jpg 20170626_100350.jpg

 

Our backpack promotion was a HUGE crowd pleaser and we ran out almost immediately. For those who made it in time, you've got a collector's item on your hands. For those who didn't, we're truly sorry that we couldn't bring about a million with us, although I feel like it still wouldn't have been enough.

 

Also in short supply were the THWACK socks in old and new designs.

20170626_090155.jpg

 

These were instant crowd pleasers and I'm happy to say that if you couldn't make it to our booth (or to the show in general), you can still score a pair on the THWACK store for a very affordable 6,000 points.

 

Over four days, our team of 15 people was able to hang out with over 5,000 attendees who stopped by to ask questions, find out what's new with SolarWinds, and share their own stories and experiences from the front lines of the IT world.

 

More than ever, I believe that monitoring experts need to take a moment to look at some of the "smaller" tools that SolarWinds has to offer. While none of our sales staff will be able to buy that yacht off the commission, these won't-break-the-bank solutions pack a lot of power.

 

  • Network Topology Mapper - No, this is not just "Network Atlas" as a separate product. It not only discovers the network, it also identifies aspects of the network that Network Atlas doesn't catch (like channel-bonded circuits). But most of all, it MAPS the discovery automatically, and it will use industry-standard icons to represent those devices on the map. No more scanning your Visio diagram and then placing little green dots on the page.
  • Kiwi Syslog - My love for this tool knows no bounds. I wrote about it recently and probably drew the diagram to implement a "network filtration layer" over a dozen times on the whiteboards built into our booth.
  • Engineer's Toolkit - Visitors to the booth - both existing customers and people new to SolarWinds - were blown away when they discovered that installing this on the polling engine allows it to drill down to near-real-time monitoring of elements for intense data collection, analysis, and troubleshooting.

 

There are more tools - including the free tools - but you get the point. Small-but-mighty is still "a thing" in the world of monitoring.

 

More people were interested in addressing their hybrid IT challenges this year than I can remember in the past, including just six months ago at Cisco Live Europe. That meant we talked a lot about NetPath, PerfStack, and even the cloud-enabled aspects of SAM and VMan. At a networking show. Convergence is a real thing and it's happening, folks.

 

Also, we had in-booth representation for the SolarWinds MSP solutions that garnered a fair share of interest among the attendees, whether they thought of themselves as MSPs, or simply wanted a cloud-native solution to monitor and manage their own remote sites.

 

All Work And No Play

 

But, of course, Cisco Live is about more than the booth. My other focus this year was preparing for and taking the Cisco CCNA exam. How did I do? You'll just have to wait until the July 12th episode of SolarWinds Lab to find out.

 

But what I did discover is that taking an exam at a convention is a unique experience. The testing center is HUGE, with hundreds of test-takers at all hours of the day. This environment comes with certain advantages. You have plenty of people to study and -- if things don't go well -- commiserate with. But I also felt that the ambient test anxiety took its toll. I saw one man, in his nervousness to get to the test site, take a tumble down the escalator. Then he refused anything except a couple of Band Aids because he "just wanted to get this done."

 

In the end, my feeling is that sitting for certification exams at Cisco Live is an interesting experience, but one I'll skip from now on. I prefer the relative quiet and comfort of my local testing center, and juggling my responsibilities in the booth AND trying to ensure I had studied enough was a huge distraction.

 

What I Missed

While I got to take a few quick walks around the vendor area this year, I missed out on keynotes and sessions, another casualty of preparing for the exam. So I missed any of the big announcements or trends that may have been happening out of the line of sight of booth #1111.

 

And while I tried to catch up with folks who have become part of this yearly pilgrimage, I missed catching up with Lauren Friedman (@Lauren), Amy ____ (), and even Roddie Hassan (@Eiddor), who I apparently missed by about 20 minutes Thursday as I left for my flight.

 

So... That's It?

Nah, there's more. My amazing wife came with me again this year, which made this trip more like a vacation than work. While Las Vegas is no Israel in terms of food, we DID manage to have a couple of nice meals.

20170628_204610.jpg

 

It was also a smart move: Not only did she win us some extra cash:

20170628_235109.jpg

 

...she also saved my proverbial butt. I typically leave my wallet in the hotel room for the whole conference. I only realized my mistake as the bus dropped us OFF at the test center. Going back for it would have meant an hour there and back and missing my scheduled exam time. But my wife had the presence of mind to stick my wallet in her bag before we left, so the crisis was averted before I suffered a major heart attack.

 

So if my wife and I were in Vegas, where were the kids?

 

They were back home. Trolling me.

 

Apparently the plans began a month ago, and pictures started posting to Twitter before our plane had even landed. Here are a few samples. You have to give them credit, they were creative.

tweet1.png tweet2.png tweet3.png

 

tweet4.png tweet5.png tweet6.png

 

But I got the last laugh. Their antics caught the attention of the Cisco Live social media team, who kept egging them on all week. Then, on Wednesday, they presented me with early entry passes to the Bruno Mars concert.

20170628_175227.jpg IMG_20170628_195602.jpg

 

My daughter took it all in stride:

reaction.png

 

 

Looking Ahead

Cisco Live is now firmly one of the high points of my yearly travel schedule. I'm incredibly excited for Cisco Live Europe, which will be in Barcelona this year.

But I got the ultimate revenge on my kids when it was announced that Cisco Live US 2018 will be in... Orlando. Yep, I'M GOING TO DISNEY!!

 

Outside of the Cisco Live circuit, I'll also be attending Microsoft Ignite in September, and re:Invent in November.

 

So stay tuned for more updates as they happen!

With humble admiration and praise for sqlrockstar and his weekly Actuator, I wonder if it might be time for an alternate Actuator. August, aka the "silly season," is just around the corner. The silly season's heat and high population density tend to make people behave differently than they would if they were cool, calm, and collected. That's why I am considering an alternate Actuator, one that will focus on the silly to honor August.

 

August is hot. And sweaty. And sweat is salty. And what could be better than connecting something hot with something salty, like bacon?

 

The Actuator: Bacon Edition!

 

http://devslovebacon.com/tech_fair

For those with a penchant for tech travel, check out London's Bacon Tech Fair. "From Nodecopters to Android hacking, there is a little something for everybody."

 

https://www.usatoday.com/story/tech/columnist/2014/03/05/oscar-mayer-bacon-alarm-gadget-and-app-tech-now/6072297/

Are you cutting edge if your iPhone can't wake you up with the smell of bacon in the morning? Not according to The Oscar Mayer Institute for the Advancement of Bacon!

 

Weird Tech 3: Bacon tech | ZDNet

Seriously, from teasing your vegan daughter to making an unforgettable first impression at the TSA baggage inspection line, these business and life products masquerading as bacon are sweet. (And salty?)

 

Bacon, Francis | Internet Encyclopedia of Philosophy

What would an educational Actuator Bacon story be without some dry Bacon matter?

 

Technology Services / Technology Services

Bacon County School District is giving every student the latest version of Office. At no cost. I guess that really takes the bacon!

 

http://idioms.thefreedictionary.com/bacon

What could be better than using the internet to expand your vocabulary with bacon idioms?

 

High-tech bacon making using industrial IoT at SugarCreek - TechRepublic

Just when you thought the Internet of Things (IoT) had run out of possibilities, data-driven decision-making guides cured meat production.

 

Food+Tech Connect Hacking Meat: Tom Mylan on Better Bacon & Technology | Food+Tech Connect

You can't make this stuff up.

On the surface, application performance management (APM) is simply defined as the process of maintaining acceptable user experience with respect to any given application by "keeping applications healthy and running smoothly." The confusion comes when you factor in all the interdependencies and nuances of what constitutes an application, as well as what “good enough” is.

 

APM epitomizes the nature vs. nurture debate. In this case, nurture is the environment: the infrastructure, the networking services, and the composite application services. Nature, on the other hand, is the code-level elements formed by the application’s DNA. The complexity of nature and nurture also plays a huge role in APM, because one can nurture an application using a multitude of solutions, platforms, and services. Similarly, the nature of the application can be coded using a variety of programming languages, as well as runtime services. Regardless of nature or nurture, APM strives to maintain good application performance.

 

And therein lies the million-dollar APM question: What is good performance? And similarly, what is good enough in terms of performance? Since every data center environment is unique, good can vary from organization to organization, even within the same vertical industry. The key to successful APM is to have proper baselines, trend reporting, and tracing to help ensure that quality of service (QoS) is always met, without paying a premium in time and resources to continuously optimize an application that may be as complex as a differential equation.
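
As a rough sketch of what a baseline-driven check can look like, the snippet below assumes a hypothetical CSV of time-stamped response times and flags samples that drift well beyond their own recent history; real APM tooling layers tracing and transaction context on top of this kind of statistical test, and the window and threshold here are arbitrary.

# Minimal sketch: flag response times that exceed a rolling baseline.
# The CSV layout, one-hour window, and three-sigma threshold are assumptions.
import pandas as pd

df = pd.read_csv("response_times.csv", parse_dates=["timestamp"]).set_index("timestamp")

baseline_mean = df["response_ms"].rolling("1h").mean()
baseline_std = df["response_ms"].rolling("1h").std()

anomalies = df[df["response_ms"] > baseline_mean + 3 * baseline_std]
print(anomalies.head())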

 

Let me know in the comment section what good looks like with respect to the applications that you’re responsible for.

Happy Birthday, America! I don’t care what people say, you don’t look a day over 220 years old to me. To honor you, we celebrated your birth in the same way we have for centuries: with explosives made in China. (I hope everyone had a safe and happy Fourth.)

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Volvo's self-driving cars are thwarted by kangaroos

I literally did not see that coming. Reminds me of the time my friend showed me videos of Springboks assaulting bicycle riders in South Africa.

 

California invested heavily in solar power. Now there's so much that other states are sometimes paid to take it

I don’t know about you, but I’m rooting for solar power to replace much of our power needs.

 

Latest Ransomware Wave Never Intended to Make Money

Well, that’s a bit different. Not sure why I didn’t think about how a rogue state might attack another in this manner, but it makes sense. Back up your data, folks; it’s your only protection other than being off the grid completely.

 

Microsoft buys cloud-monitoring vendor Cloudyn

Most notable here is that Cloudyn helps to monitor cross-cloud architectures. So, Azure isn’t just looking to monitor itself; it’s looking to help you monitor AWS and Google, too. Take note, folks, this is just the beginning of Azure making itself indispensable.

 

New Girl Scout badges focus on cyber crime, not cookie sales

Taking the “play like a girl” meme to a new level, the Girl Scouts will bring us “stop cybercrime like a girl."

 

The Problem with Data

This. So much this. Data is the most critical asset your company owns. You must take steps to protect it as much as anything else.

 

Fighting Evil in Your Code: Comments on Comments

Comments are notes that you write to "future you." If you are coding and not using comments, you are doing it wrong.

 

I was looking for an image that said "America" and I think this sums up where we are after 241 years of living on our own:

kool-aid.jpg

By Joe Kim, SolarWinds EVP, Engineering & Global CTO

 

Last year, in SolarWinds’ annual cybersecurity survey of federal IT managers, respondents listed “careless and untrained insiders” as a top cybersecurity threat, tying “foreign governments” at 48 percent. External threats may be more sensational, but for many federal network administrators, the biggest threat may be sitting right next to them.

 

To combat internal threats in your IT environment, focus your attention on implementing a combination of tools, procedures, and good old-fashioned information sharing.

 

Technology

Our survey respondents identified tools pertaining to identity and access management, intrusion prevention and detection, and security information and log and event management software as “top-tier” tools to prevent both internal and external threats. Each of these can help network administrators automatically identify potential problems and trace intrusions back to their source, whether that source is a foreign attacker or simply a careless employee who left an unattended USB drive on their desk.

 

Training

Some 16 percent of the survey respondents cited “lack of end-user security training” as a significant cause of increased agency vulnerability. The dangers, costs, and threats posed by accidental misuse of agency information, mistakes, and employee error shouldn’t be underestimated. Agency employees need to be acutely aware of the risks that carelessness can bring.

 

Policies

While a majority of agencies (55 percent) feel that they are just as vulnerable to attacks today as they were a year ago, the survey indicates that more feel they are less vulnerable (28 percent) than more vulnerable (16 percent), hence the need to make policies a focal point to prevent network risks. These policies can serve as blueprints that outline agencies’ overall approaches to security, but should also contain specific details regarding authorized users and the use of acceptable devices. That’s especially key in this new age of bring-your-own-anything.

 

Finally, remember that security starts with you and your IT colleagues. As you’re training others in your organization, take time to educate yourself. Read up on the latest trends and threats. Talk to your peers. Visit online forums. And see how experts and bloggers (like yours truly) are noting how the right combination of technology, training, and policies can effectively combat cybersecurity threats.

 

  Find the full article on GovLoop.
