
Geek Speak


To Cloud, or Not to Cloud

Posted by scuff Jul 18, 2017

Before cloud was a thing, I looked after servers that I could physically touch. In some cities, those servers were in the buildings owned by the organization I worked for. My nation’s capital was also the IT capital at the time, and my first IT role was on the same floor as our server room and mainframe (yes, showing my age but that’s banking for you). In other cities, we rented space in someone else’s data center, complete with impressive physical access security. I have fond memories of overnight change windows as we replaced disks or upgraded operating systems. Now that I'm in the SMB world, I often miss false floors, flickering LEDs, the white noise of servers, and the constant chill of air conditioning.

 

I saw the mainframe slowly disappear, and watched server consolidation and virtualization shrink our server hardware footprint. All of that was pre-cloud.

 

Now the vendors have pivoted, with a massive shift in their revenue streams to everything “as a Service.” And boy, are they letting us know. Adobe is a really interesting case study on that shift, stating that their move away from expensive, perpetually licensed box software has reinvigorated the company and assured their survival. They could have quite easily gone the way of Kodak. The vendors are laughing all the way to the bank, as their licensing peaks are flattened out into glorious, monthly recurring revenue. They couldn’t be happier.

 

But where does it leave us, the customer?

 

I want to put aside the technical aspects of the cloud (we’ll get to those in the next article) and explore other aspects of being a cloud customer. For starters, that’s a big financial shift for us. We now need less capital expenditure and more operational expenditure, which in my town means more tax deductions. Even that has pros and cons, though. Are you better off investing in IT infrastructure during the good times, not having to keep paying for it in the lean months? (This is a polarizing point of view on purpose. I'm digging for comments, so please chime in below.)

 

What about vendor lock-in? Are we comfortable with the idea that we could take our virtual servers and associated management tools from AWS and move them to Microsoft Azure, or does our reliance on a vendor’s cloud kill other options in the future? It feels a little like renegotiating that Microsoft Enterprise Agreement. Say "no" and rip out all of our Microsoft software? Does the cloud change that, or not?

 

In some areas, do we even have a choice? We can’t buy a Slack or Microsoft Teams collaboration package outright and install it on an on-premises server. Correct me if I’m wrong here, but maybe you’ve found an open source alternative?

 

So, with the technical details aside, what are our hang-ups with a cloud consumption model? Would the vendors even listen, or do they have an answer for every objection? Tell me why the cloud makes no sense or why we should all agree that it's okay that this model has become so normal.

 

Viva la cloud!

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

In last year’s third annual SolarWinds Federal Cybersecurity Survey, 38 percent of respondents indicated that the increasing use of smart cards is the primary reason why federal agencies have become less vulnerable to cyberattacks than a year ago. The 2016 survey also revealed that nearly three-fourths of federal IT professionals use smart cards as a means of network protection, and more than half of those surveyed called smart cards the most valuable product when it comes to network security.

 

Indeed, thanks to their versatility, prevalence, and overall effectiveness, there’s no denying that smart cards play a crucial role in providing a defensive layer to protect networks from breaches. Case in point: the attack on the Office of Personnel Management that exposed more than 21 million personnel records. The use of smart cards could perhaps have provided sufficient security to deter such an attack.

 

But there’s increasing evidence that the federal government may be moving on from identity cards sooner than you may think. Department of Defense (DoD) Chief Information Officer Terry Halvorsen has said that he plans to phase out secure identity cards over the next two years in favor of more agile, multi-factor authentication.

 

Smart cards may be an effective first line of defense, but they should be complemented by other security measures that create a deep and strong security posture. First, federal IT professionals should incorporate Security Information and Event Management (SIEM) into the mix. Through SIEM, managers can obtain instantaneous log-based alerts regarding suspicious network activity, while SIEM tools provide automated responses that can mitigate potential threats. It’s a surefire line of defense that must not be overlooked.

 

Federal IT professionals may also want to consider implementing network configuration management software. These tools can help improve network security and compliance by automatically detecting and preventing out-of-process changes that can disrupt network operations. Users can more easily monitor and audit the myriad devices hitting their networks, assess configurations for compliance, and address known vulnerabilities. It’s another layer of protection that goes beyond simple smart cards.

 

At the end of the day, no single tool or technology can provide the impenetrable defense that our IT networks need to prevent a breach or attack. And technology is continually changing. It is the duty of every federal IT professional to stay current on the latest tools and technologies that can make our networks safer.

 

Be sure to look at the entire puzzle when it comes to your network’s security. Know your options and employ multiple tools and technologies so that you have a well-fortified network that goes beyond identification tools that may soon be outdated anyway. That’s the really smart thing to do.

 

  Find the full article on GovLoop.

The concern for individual privacy has been growing since the 19th century, when the mass dissemination of newspapers and photography became commonplace. The concerns we had then -- the right to be left alone and the right to keep private what we choose to keep private -- are echoed in today's conversations about data security and privacy.

 

The contemporary conversation about privacy has centered on, and ironically has also been promulgated by, the technology that’s become part of our daily life. Computer technology in particular, including the internet, mobile devices, and the development of machine learning, has enriched our lives in many ways. However, these and other technologies have advanced partly because of the collection of enormous amounts of personal information. Today we must ask if the benefits, both individual and societal, are worth the loss of some semblance of individual privacy on a large scale.

 

Keep in mind that privacy is not the same as secrecy. When we use the bathroom, everyone knows what we're doing, so there's no secret. However, our use of the bathroom is still very much a private matter. On the other hand, credit card information, for most people, is considered a secret. Though some of the data that's commonly collected today might not necessarily be secret, we still must grapple with issues of privacy, or, in other words, we must grapple with the right to share or keep hidden information about ourselves.

 

An exhaustive look at the rights of the individual with regard to privacy would take volumes to analyze its cultural, legal, and deeply philosophical foundations, but today we find ourselves doing just that. Our favorite technology services collect a tremendous amount of information about us with what we hope are well-intentioned motives. Sometimes this is done unwittingly, such as when our browsing history or IP address is recorded. Sometimes these services invite us to share information, such as when we are asked to complete an online profile for a social media website.

 

Seeking to provide better products and services to customers is a worthy endeavor for a company, but concerns arise when a company doesn't secure our personal information, which puts our cherished privacy at risk. In terms of government entities and nation-states, the issue becomes more complex. The balance between privacy and security, between the rights of the individual and the safety of a society, has been the cause of great strife and even war.

 

Today's technology exacerbates this concern and fuels the fire of debate. We're typically very willing to share personal information with social media websites and, in the case of retail institutions such as e-commerce websites and online banks, even secret information. Though this is data we choose to give, we do so with an element of trust that these institutions will handle our information in such a way as to sufficiently ensure its safety and our privacy.

 

Therein lies the problem. It's not that we're unwilling to share information, necessarily. The problem is with the security of that information.

 

In recent years, we’ve seen financial institutions, retail giants, hospitals, e-commerce companies, and the like all fall prey to cyber attacks that put our private and sometimes secret information at risk of compromise.

 

Netflix knows our credit card information.

 

Facebook knows our birthday, religion, sexual preference, and what we look like.

 

Google knows the content of our email.

 

Many mobile app makers know our exact geographic location.

 

Mortgage lenders know our military history and our disability status.

 

Our nations know our voting history and political affiliation.

 

We almost need to share this information to function in today's society. Sure, we could drop off the grid, but except for that sort of dramatic lifestyle change, we've come to rely on email, e-commerce, electronic medical records, online banking, government collection of data, and even social media.

 

Today, organizations, including our own employers, store information of all types, including our personal information, in distributed databases that are sometimes spread across the world. This adds another layer of complexity. With globally distributed information, we must deal with competing cultures, values, and laws that govern the information stored within and traversing national borders.

 

The security of our information, and therefore the control of our privacy, is now almost completely out of our hands, and it's getting worse.

 

Those of us working in technology might respond by investing in secure, encrypted email services, utilizing password best practices, and choosing to avoid websites that require significant personal information. But even we, as technology professionals, use online banking, hand over tremendous private and secret information to our employers, and live in nations in which our governments collect, store, and analyze personal data on a consistent basis.

  

The larger society seems to behave similarly. There may be a moment of hesitation when entering our social security number in an online application; nevertheless, we enter and submit it. Private and public institutions have reacted to this by developing both policy and technological solutions to mitigate the risk associated with putting our personal information out there. Major components of HIPAA seek to protect individuals' medical information. PCI-DSS was created to protect individuals' credit card information in an effort to reduce credit card fraud. Many websites are moving away from unencrypted HTTP to encrypted HTTPS.

 

So the climate of data security doesn't seem to be centered much on limiting the collection of information. The benefit we gain from data collection and analysis precludes our willingness to stop sharing our personal and secret information. Instead, attention is given to securing information and developing cultural best practices to protect ourselves from malicious people and insecure technology. The reaction, by and large, hasn't been to share less, but to better protect what we share.

 

In mid-2017, we see reports of cyber attacks and data breaches almost daily. These are the high-profile attacks that make the headlines, so imagine how much malicious activity is actually going on. It's clear that the current state of data security, and therefore of our privacy, is perilous. Cyber attacks and their subsequent breaches are so commonplace that they've become part of our popular culture.

 

That aspect of data security is getting worse exponentially, and since we're mostly unwilling or unable to stop sharing personal information, we must ensure that our technology and cultural practices also develop exponentially to mitigate that risk.

 

 

 

 

 

 


This version of the Actuator comes from the beaches of Rhode Island, where the biggest security threat we face is seagulls trying to steal our bacon snacks.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Someone's phishing US nuke power stations

“We don’t want you to panic, but here’s a headline to an article to make you panic.”

 

Critical Infrastructure Defenses Woefully Weak

If only there were warning signs that someone was actively trying to hack in.

 

Black Hat Survey: Security Pros Expect Major Breaches in Next Two Years

Oh, I doubt it will take a full two years for another major breach.

 

MIT researchers used a $150 Microsoft Kinect to 3D scan a giant T. rex skull

“Ferb, I know what we’re gonna do today.”

 

Tesla Loses 'Most Valuable U.S. Carmaker' Crown As Stock Takes $12 Billion Hit

I liked the original title better: "Tesla stock at bargain prices!" It’s OK, Elon, just keep telling yourself that it wasn’t real money…yet.

 

Salary Gossip

Well, this stood out: “DO NOT WASTE MONEY ON AN MBA. You will make 2X more on average as an engineer.”

 

Wikipedia: The Text Adventure

This is both brilliant and horrible. If you are anything like me, you will spend 30 minutes traveling from the Statue of Liberty through lower Manhattan.

 

Just a regular night at the beach:

FullSizeRender 2.jpg

As a network engineer, I don't think I've ever had the pleasure of having every device configured consistently in a network. But what does that even mean? What is consistency when we're potentially talking about multiple vendors and models of equipment?

 

There Can Only Be One (Operating System)

 

Claim: For any given model of hardware there should be one approved version of code deployed on that hardware everywhere across an organization.

 

Response: And if that version has a bug, then all your devices have that bug. This is the same basic security paradigm that leads us to have multiple firewall tiers comprising different vendors for extra protection against bugs in one vendor's code. I get it, but it just isn't practical. The reality is that it's hard enough upgrading device software to keep up with critical security patches, let alone doing so while maintaining multiple versions of code.

Why do we care? Because different versions of code can behave differently. Default command options can change between versions; previously unavailable options and features are added in new versions. Basically, having a consistent revision of code running means that you have a consistent platform on which to make changes. In most cases, that is probably worth the relatively rare occasions on which a serious enough bug forces an emergency code upgrade.
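To make that concrete, here is a minimal Python sketch of a version audit. The inventory list and approved-version table are invented for illustration; in practice you would pull this data from whatever inventory or monitoring system you already use.

```python
# A minimal sketch: compare the software version reported by each device
# against the one approved version for its hardware model. The data below
# is hypothetical and would normally come from an inventory system.

APPROVED_VERSIONS = {
    "catalyst-3850": "16.12.4",
    "nexus-9300":    "9.3(8)",
}

inventory = [
    {"name": "nyc-access-01", "model": "catalyst-3850", "version": "16.12.4"},
    {"name": "nyc-access-02", "model": "catalyst-3850", "version": "16.9.3"},
    {"name": "dc1-leaf-01",   "model": "nexus-9300",    "version": "9.3(8)"},
]

def audit_versions(devices):
    """Yield (device, expected, actual) for every device off the approved version."""
    for dev in devices:
        expected = APPROVED_VERSIONS.get(dev["model"])
        if expected is not None and dev["version"] != expected:
            yield dev["name"], expected, dev["version"]

if __name__ == "__main__":
    for name, expected, actual in audit_versions(inventory):
        print(f"{name}: running {actual}, approved version is {expected}")
```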

 

Corollary: The approved code version should be changing over time, as necessitated by feature requirements, stability improvements, and critical bugs. To that end, developing a repeatable method by which to upgrade code is kind of important.

 

Consistency in Device Management

 

Claim: Every device type should have a baseline template that implements a consistent management and administration configuration, with specific localized changes as necessary. For example, a template might include:

 

  • NTP / time zone
  • Syslog
  • SNMP configuration
  • Management interface ACLs
  • Control plane policing
  • AAA (authentication, authorization, and accounting) configuration
  • Local account if AAA authentication server fails*

 

(*) There are those who would argue, quite successfully, that such a local account should have a password unique to each device. The password would be extracted from a secure location (a break-glass type of repository) on demand when needed and changed immediately afterward to prevent reuse of the local account. The argument is that a shared password, if compromised, would leave every device open to access. I agree, and I tip my hat to anybody who successfully implements this.

 

Response: Local accounts are for emergency access only because we all use a centralized authentication service, right? If not, why not? Local accounts for users are a terrible idea, and have a habit of being left in place for years after a user has left the organization.

 

NTP is a must for all devices so that syslog/SNMP timestamps are synced up. Choose one timezone (I suggest UTC) and implement it on your devices worldwide. Using a local time zone is a guaranteed way to mess up log analysis the first time a problem spans time zones; whatever time zone makes the most sense, use it, and use it everywhere. The same time zone should be configured in all network management and alerting software.
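As a small illustration of why one time zone everywhere matters, here is a hedged Python sketch (standard library only, Python 3.9+) that normalizes local-time log timestamps to UTC before correlating them; the timestamp format is an assumption.

```python
# Correlating logs is much easier when every timestamp is normalized to one
# zone (UTC here). Requires Python 3.9+ for zoneinfo; the sample timestamp
# format is an assumption for the example.

from datetime import datetime
from zoneinfo import ZoneInfo

def to_utc(timestamp: str, source_tz: str) -> datetime:
    """Parse a local 'YYYY-MM-DD HH:MM:SS' timestamp and return it in UTC."""
    local = datetime.strptime(timestamp, "%Y-%m-%d %H:%M:%S")
    local = local.replace(tzinfo=ZoneInfo(source_tz))
    return local.astimezone(ZoneInfo("UTC"))

# The same wall-clock time logged in two regions is only comparable after conversion.
print(to_utc("2017-07-18 09:15:00", "America/Chicago"))
print(to_utc("2017-07-18 09:15:00", "Europe/London"))
```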

 

Other elements of the template are there to make sure that the same access is available to every device. Why wouldn't you want to do that?

 

Corollary: Each device and software version could have its own limitations, so multiple templates will be needed, adapted to the capabilities of each device.
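For the sake of illustration, here is a minimal Python sketch of the baseline-template idea: one template per platform, with only the site-specific values substituted. The IOS-style commands and addresses are placeholders, not a recommendation for any particular vendor or software version.

```python
# A minimal sketch of a per-device-type baseline: one template per platform,
# rendered with site-specific values. Commands and addresses are illustrative.

BASELINE_TEMPLATE = """\
clock timezone UTC 0
ntp server {ntp_server}
logging host {syslog_server}
snmp-server community {snmp_ro} RO
"""

def render_baseline(**site_vars) -> str:
    """Fill in the baseline template with this site's values."""
    return BASELINE_TEMPLATE.format(**site_vars)

if __name__ == "__main__":
    print(render_baseline(
        ntp_server="10.0.0.10",
        syslog_server="10.0.0.20",
        snmp_ro="example-ro",
    ))
```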

 

Naming Standards

 

Claim: Pick a device naming standard and stick with it. If it's necessary to change it, go back and change all the existing devices as well.

 

Response: I feel my hat tipping again, but in principle this is a really good idea. I did work for one company where all servers were given six-letter dictionary words as their names, a policy driven by the security group who worried that any kind of semantically meaningful naming policy would reveal too much to an attacker. Fair play, but having to remember that the syslog servers are called WINDOW, BELFRY, CUPPED, and ORANGE is not exactly friendly. Particularly in office space, it can really help to be able to identify which floor or closet a device is in. I personally lean toward naming devices by role (e.g. leaf, access, core, etc.) and never by device model. How many places have switches called Chicago-6500-01 or similar? And when you upgrade that switch, what happens? And is that 6500 a core, distribution, access, or maybe a service-module switch?

 

Corollary: Think the naming standard through carefully, including giving thought to future changes.
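A naming standard is also much easier to stick with if it can be checked automatically. Here is a hedged Python sketch that validates hostnames against an example site-role-number convention; the pattern itself is illustrative, not a recommendation.

```python
# A sketch of enforcing a role-based naming standard with a regular
# expression. The convention (site-role-number, e.g. nyc-access-01) is
# just an example, not a recommendation.

import re

NAME_PATTERN = re.compile(
    r"^(?P<site>[a-z]{3})-(?P<role>core|dist|access|leaf|spine)-(?P<num>\d{2})$"
)

def check_name(hostname: str) -> bool:
    """Return True if the hostname matches the naming convention."""
    return NAME_PATTERN.match(hostname) is not None

for name in ["nyc-access-01", "Chicago-6500-01", "lon-core-03"]:
    print(f"{name}: {'ok' if check_name(name) else 'does not match the standard'}")
```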

 

Why Do This?

 

There are more areas that could and should be consistent. Maybe consider things like:

 

  • an interface naming standard
  • standard login banners
  • routing protocol process numbers
  • vlan assignments
  • CDP/LLDP
  • BFD parameters
  • MTU (oh my goodness, yes, MTU)

 

But why bother? Consistency brings a number of obvious operational benefits.

 

  • Configuring a new device using a standard template means a security baseline is built into the deployment process
  • Consistent administrative configuration reduces the number of devices which, at a critical moment in troubleshooting, turn out to be inaccessible
  • Logs and events are consistently and accurately timestamped
  • Things work, in general, the same way everywhere
  • Every device looks familiar when connecting
  • Devices are accessible, so configurations can be backed up into a configuration management tool, and changes can be pushed out, too
  • Configuration audit becomes easier

 

The only way to know if the configurations are consistent is to define a standard and then audit against it. If things are set up well, such an audit could even be automated. After a software upgrade, run the audit tool again to help ensure that nothing was lost or altered during the process.
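As a rough illustration of what that audit could look like, here is a small Python sketch that diffs a device configuration against the golden baseline using only the standard library; the config snippets are invented for the example.

```python
# Diff a device's configuration against the golden baseline and report drift.
# The config text below is invented; in practice it would come from your
# configuration backups.

import difflib

GOLDEN = """\
clock timezone UTC 0
ntp server 10.0.0.10
logging host 10.0.0.20
"""

ACTUAL = """\
clock timezone EST -5
ntp server 10.0.0.10
logging host 10.0.0.20
logging host 192.0.2.99
"""

def audit_config(golden: str, actual: str, name: str) -> list[str]:
    """Return unified-diff lines describing drift from the golden config."""
    return list(difflib.unified_diff(
        golden.splitlines(), actual.splitlines(),
        fromfile="golden", tofile=name, lineterm=""))

drift = audit_config(GOLDEN, ACTUAL, "nyc-access-02")
print("\n".join(drift) if drift else "No drift detected")
```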

 

What does your network look like? Is it consistent, or is it, shall we say, a product of organic growth? What are the upsides -- or downsides -- to consistency like this?

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

Last year, hybrid IT was the new black, at least according to the SolarWinds 2016 Public Sector IT Trends Report. In surveying 116 public sector IT professionals, we found that agencies are actively moving much of their infrastructure to the cloud, while still keeping a significant number of applications in-house. They want the many benefits of the cloud (cost efficiency, agility) without the perceived drawbacks (security, compliance).

 

Perceptions in general have evolved since our 2015 report, where 17 percent of respondents said that new technologies were “extremely important” for their agencies’ long-term successes; in 2016, that number jumped to 26 percent. Conversely, in 2015 45 percent of survey respondents cited “lack of skills needed to implement/manage” new technologies as a primary barrier to cloud adoption; in 2016, only 30 percent of respondents cited that as a problem, indicating that cloud skill sets may have improved.

 

In line with these trends, an astounding 41 percent of respondents in this year’s survey believe that 50 percent or more of their organizations’ total IT infrastructure will be in the cloud within the next three to five years. This supports the evidence that since the United States introduced the Federal Cloud Computing Strategy in 2011, there’s been an unquestionable shift toward cloud services within government IT. We even saw evidence of that way back in 2014, when that year’s IT Trends Report indicated that more than 21 percent of public sector IT professionals felt that cloud computing was the most important technology for their agencies to remain competitive.

 

However, there remain growing concerns over security and compliance. Agencies love the idea of gaining agility and greater cost efficiencies, but some data is simply too proprietary to hand over to an offsite hosted services provider, even one that is Federal Risk and Authorization Management Program (FedRAMP)-compliant. This has led many organizations to hedge their bets on which applications and how much data they wish to migrate. As such, many agency IT administrators have made the conscious decision to keep at least portions of their infrastructures on-premises. According to our research, it’s very likely that these portions will never be migrated to the cloud.

 

Thus, some applications are hosted, while others continue to be maintained within the agencies themselves, creating a hybrid IT environment that can present management challenges. When an agency has applications existing in multiple places, it creates a visibility gap that makes it difficult for federal IT professionals to completely understand what’s happening with those applications. A blind spot is created between applications hosted offsite and those maintained in-house. Administrators are generally only able to monitor internally or externally. As a result, they can’t tell what’s happening with their applications as data passes from one location to another.

 

That’s a problem in a world where applications are the government’s lifeblood, and where administrators have invested a lot of time and resources into getting a comprehensive picture of application performance. For them, it’s imperative that they implement solutions and strategies that allow them to map hybrid IT paths from source to destination. This can involve “spoofing” application traffic to get a better picture of how on-site and hosted applications are working together. Then, they can deploy security and event management tools to monitor what’s going on with the data as it passes through the hybrid infrastructure.

 

In fact, in our 2016 survey, we found that monitoring and management tools and metrics are in high demand in today’s IT environment. Forty-eight percent of our respondents recognized this combination as critically important to managing a hybrid IT infrastructure. According to them, it’s the most important skill set they need to develop at this point.

 

Next year, when we do our updated report, we’ll probably see different results, but I’m willing to bet that cloud migration will still be at the top of public sector IT managers’ to-do lists. Like polo shirts and cashmere sweaters, it’s something that won’t be going out of style anytime soon.

 

Find the full article on Federal Technology Insider.


After a week with over 27,000 network nerds (and another week to recover), I'm here to tell you about who and what I saw (and who/what I missed) at Cisco Live US 2017.

 

The View From the Booth

Monday morning the doors opened and WE WERE MOBBED. Here are some pictures:

20170626_100259.jpg 20170626_100350.jpg

 

Our backpack promotion was a HUGE crowd pleaser and we ran out almost immediately. For those who made it in time, you've got a collector's item on your hands. For those who didn't, we're truly sorry that we couldn't bring about a million with us, although I feel like it still wouldn't have been enough.

 

Also in short supply were the THWACK socks in old and new designs.

20170626_090155.jpg

 

These were instant crowd pleasers and I'm happy to say that if you couldn't make it to our booth (or to the show in general), you can still score a pair on the THWACK store for a very affordable 6,000 points.

 

Over four days, our team of 15 people was able to hang out with over 5,000 attendees who stopped by to ask questions, find out what's new with SolarWinds, and share their own stories and experiences from the front lines of the IT world.

 

More than ever, I believe that monitoring experts need to take a moment to look at some of the "smaller" tools that SolarWinds has to offer. While none of our sales staff will be able to buy that yacht off the commission, these won't-break-the-bank solutions pack a lot of power.

 

  • Network Topology Mapper - No, this is not just "Network Atlas" as a separate product. It not only discovers the network, it also identifies aspects of the network that Network Atlas doesn't catch (like channel-bonded circuits). But most of all, it MAPS the discovery automatically. And it will use industry-standard icons to represent those devices on the map. No more scanning your Visio diagram and then placing little green dots on the page.
  • Kiwi Syslog - My love for this tool knows no bounds. I wrote about it recently and probably drew the diagram to implement a "network filtration layer" over a dozen times on the whiteboards built into our booth.
  • Engineer's Toolkit - Visitors to the booth - both existing customers and people new to SolarWinds - were blown away when they discovered that installing this on the polling engine allows it to drill down to near-real-time monitoring of elements for intense data collection, analysis, and troubleshooting.

 

There are more tools - including the free tools - but you get the point. Small-but-mighty is still "a thing" in the world of monitoring.

 

More people were interested in addressing their hybrid IT challenges this year than I can remember in the past, including just six months ago at Cisco Live Europe. That meant we talked a lot about NetPath, PerfStack, and even the cloud-enabled aspects of SAM and VMan. At a networking show. Convergence is a real thing and it's happening, folks.

 

Also, we had in-booth representation for the SolarWinds MSP solutions that garnered a fair share of interest among the attendees, whether they thought of themselves as MSPs, or simply wanted a cloud-native solution to monitor and manage their own remote sites.

 

All Work And No Play

 

But, of course, Cisco Live is about more than the booth. My other focus this year was preparing for and taking the Cisco CCNA exam. How did I do? You'll just have to wait until the July 12th episode of SolarWinds Lab to find out.

 

But what I did discover is that taking an exam at a convention is a unique experience. The testing center is HUGE, with hundreds of test-takers at all hours of the day. This environment comes with certain advantages. You have plenty of people to study and -- if things don't go well -- commiserate with. But I also felt that the ambient test anxiety took its toll. I saw one man, in his nervousness to get to the test site, take a tumble down the escalator. Then he refused anything except a couple of Band Aids because he "just wanted to get this done."

 

In the end, my feeling is that sitting for certification exams at Cisco Live is an interesting experience, but one I'll skip from now on. I prefer the relative quiet and comfort of my local testing center, and juggling my responsibilities in the booth AND trying to ensure I had studied enough was a huge distraction.

 

What I Missed

While I got to take a few quick walks around the vendor area this year, I missed out on keynotes and sessions, another casualty of preparing for the exam. So I missed any of the big announcements or trends that may have been happening out of the line of sight of booth #1111.

 

And while I tried to catch up with folks who have become part of this yearly pilgrimage, I missed catching up with Lauren Friedman (@Lauren), Amy ____ (), and even Roddie Hassan (@Eiddor), who I apparently missed by about 20 minutes Thursday as I left for my flight.

 

So... That's It?

Nah, there's more. My amazing wife came with me again this year, which made this trip more like a vacation than work. While Las Vegas is no Israel in terms of food, we DID manage to have a couple of nice meals.

20170628_204610.jpg

 

It was also a smart move: Not only did she win us some extra cash:

20170628_235109.jpg

 

...she also saved my proverbial butt. I typically leave my wallet in the hotel room for the whole conference. I only realized my mistake as the bus dropped us OFF at the test center. Going back for it would have meant an hour there and back, and missing my scheduled exam time. But my wife had the presence of mind to stick my wallet in her bag before we left, so the crisis was averted before I suffered a major heart attack.

 

So if my wife and I were in Vegas, where were the kids?

 

They were back home. Trolling me.

 

Apparently the plans began a month ago, and pictures started posting to Twitter before our plane had even landed. Here are a few samples. You have to give them credit, they were creative.

tweet1.png tweet2.png tweet3.png

 

tweet4.png tweet5.png tweet6.png

 

But I got the last laugh. Their antics caught the attention of the Cisco Live social media team, who kept egging them on all week. Then, on Wednesday, they presented me with early entry passes to the Bruno Mars concert.

20170628_175227.jpg IMG_20170628_195602.jpg

 

My daughter took it all in stride:

reaction.png

 

 

Looking Ahead

Cisco Live is now firmly one of the high points of my yearly travel schedule. I'm incredibly excited for Cisco Live Europe, which will be in Barcelona this year.

But I got the ultimate revenge on my kids when it was announced that Cisco Live US 2018 will be in... Orlando. Yep, I'M GOING TO DISNEY!!

 

Outside of the Cisco Live circuit, I'll also be attending Microsoft Ignite in September, and re:Invent in November.

 

So stay tuned for more updates as they happen!

With humble admiration and praise for sqlrockstar and his weekly Actuator, I wonder if it might be time for an alternate Actuator. August, aka the "silly season," is just around the corner. The silly season's heat and high population density tend to make people behave differently than they would if they were cool, calm, and collected. That's why I am considering an alternate Actuator, one that will focus on the silly to honor August.

 

August is hot. And sweaty. And sweat is salty. And what could be better than connecting something hot with something salty, like bacon?

 

The Actuator:  Bacon Edition!

 

http://devslovebacon.com/tech_fair

For those with a penchant for tech travel, check out London's Bacon Tech Fair. "From Nodecopters to Android hacking, there is a little something for everybody."

 

https://www.usatoday.com/story/tech/columnist/2014/03/05/oscar-mayer-bacon-alarm-gadget-and-app-tech-now/6072297/

Are you cutting edge if your iPhone can't wake you up with the smell of bacon in the morning? Not according to The Oscar Mayer Institute for the Advancement of Bacon!

 

Weird Tech 3: Bacon tech | ZDNet

Seriously, from teasing your vegan daughter to making an unforgettable first impression at the TSA baggage inspection line, these business and life products masquerading as bacon are sweet. (And salty?)

 

Bacon, Francis | Internet Encyclopedia of Philosophy

What would an educational Actuator Bacon story be without some dry Bacon matter?

 

Technology Services / Technology Services

Bacon County School District is giving every student the latest version of Office. At no cost. I guess that really takes the bacon!

 

http://idioms.thefreedictionary.com/bacon

What could be better than using the internet to expand your vocabulary with bacon idioms?

 

High-tech bacon making using industrial IoT at SugarCreek - TechRepublic

Just when you thought the Internet of Things (IoT) had run out of possibilities, data-driven decision-making guides cured meat production.

 

Food+Tech Connect Hacking Meat: Tom Mylan on Better Bacon & Technology | Food+Tech Connect

You can't make this stuff up.

On the surface, application performance management (APM) is simply defined as the process of maintaining acceptable user experience with respect to any given application by "keeping applications healthy and running smoothly." The confusion comes when you factor in all the interdependencies and nuances of what constitutes an application, as well as what “good enough” is.

 

APM epitomizes the nature vs. nurture debate. In this case, nurture is the environment: the infrastructure and networking services, as well as composite application services. Nature, on the other hand, is the code-level elements that form the application’s DNA. The complexity of nature and nurture also plays a huge role in APM, because one can nurture an application using a multitude of solutions, platforms, and services. Similarly, the nature of the application can be coded using a variety of programming languages, as well as runtime services. Regardless of nature or nurture, APM strives to maintain good application performance.

 

And therein lies the million dollar APM question: What is good performance? And similarly, what is good enough in terms of performance? Since every data center environment is unique, good can vary from organization to organization, even within the same vertical industry. The key to successful APM is to have proper baselines, trends reporting, and tracing to help ensure that Quality-of-Service (QoS) is always met without paying a premium in terms of time and resources while trying to continuously optimize an application that may be equivalent to a differential equation.
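To make the baseline idea a little more concrete, here is a minimal Python sketch that learns "normal" response time from historical samples and flags anything well outside it. The three-standard-deviation threshold and the sample values are illustrative assumptions, not a recommendation for any particular application.

```python
# Derive a "normal" response time from a window of historical samples and
# flag anything well outside it. Threshold and samples are illustrative.

import statistics

history_ms = [212, 198, 225, 240, 205, 219, 230, 201, 215, 222]  # past response times

baseline = statistics.mean(history_ms)
spread = statistics.stdev(history_ms)
threshold = baseline + 3 * spread

def is_anomalous(sample_ms: float) -> bool:
    """True if a new response-time sample falls outside the learned baseline."""
    return sample_ms > threshold

for sample in (208, 390):
    state = "outside baseline" if is_anomalous(sample) else "within baseline"
    print(f"{sample} ms: {state} (baseline {baseline:.0f} ms, threshold {threshold:.0f} ms)")
```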

 

Let me know in the comment section what good looks like with respect to the applications that you’re responsible for.

Happy Birthday, America! I don’t care what people say, you don’t look a day over 220 years old to me. To honor you, we celebrated your birth in the same way we have for centuries: with explosives made in China. (I hope everyone had a safe and happy Fourth.)

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Volvo's self-driving cars are thwarted by kangaroos

I literally did not see that coming. Reminds me of the time my friend showed me videos of Springboks assaulting bicycle riders in South Africa.

 

California invested heavily in solar power. Now there's so much that other states are sometimes paid to take it

I don’t know about you, but I’m rooting for solar power to replace much of our power needs.

 

Latest Ransomware Wave Never Intended to Make Money

Well, that’s a bit different. Not sure why I didn’t think about how a rogue state may attack another in this manner, but it makes sense. Back up your data folks, it’s your only protection other than being off the grid completely.

 

Microsoft buys cloud-monitoring vendor Cloudyn

Most notable here is that Cloudyn helps to monitor cross-cloud architectures. So, Azure isn’t just looking to monitor itself, it’s looking to help you monitor AWS and Google, too. Take note folks, this is just the beginning of Azure making itself indispensable.

 

New Girl Scout badges focus on cyber crime, not cookie sales

Taking the “play like a girl” meme to a new level, the Girl Scouts will bring us “stop cybercrime like a girl."

 

The Problem with Data

This. So much this. Data is the most critical asset your company owns. You must take steps to protect it as much as anything else.

 

Fighting Evil in Your Code: Comments on Comments

Comments are notes that you write to "future you." If you are coding and not using comments, you are doing it wrong.

 

I was looking for an image that said "America" and I think this sums up where we are after 241 years of living on our own:

kool-aid.jpg

By Joe Kim, SolarWinds EVP, Engineering & Global CTO

 

Last year, in SolarWinds’ annual cybersecurity survey of federal IT managers, respondents listed “careless and untrained insiders” as a top cybersecurity threat, tying “foreign governments” at 48 percent. External threats may be more sensational, but for many federal network administrators, the biggest threat may be sitting right next to them.

 

To combat internal threats in your IT environment, focus your attention on implementing a combination of tools, procedures, and good old-fashioned information sharing.

 

Technology

Our survey respondents identified tools pertaining to identity and access management, intrusion prevention and detection, and security information, log, and event management as “top-tier” tools to prevent both internal and external threats. Each of these can help network administrators automatically identify potential problems and trace intrusions back to their source, whether that source is a foreign attacker or simply a careless employee who left an unattended USB drive on their desk.

 

Training

Some 16 percent of the survey respondents cited “lack of end-user security training” as a significant cause of increased agency vulnerability. The dangers, costs, and threats posed by accidental misuse of agency information, mistakes, and employee error shouldn’t be underestimated. Agency employees need to be acutely aware of the risks that carelessness can bring.

 

Policies

While a majority of agencies (55 percent) feel that they are just as vulnerable to attacks today as they were a year ago, the survey indicates that more feel they are less vulnerable (28 percent) than more vulnerable (16 percent), hence the need to make policies a focal point to prevent network risks. These policies can serve as blueprints that outline agencies’ overall approaches to security, but should also contain specific details regarding authorized users and the use of acceptable devices. That’s especially key in this new age of bring-your-own-anything.

 

Finally, remember that security starts with you and your IT colleagues. As you’re training others in your organization, take time to educate yourself. Read up on the latest trends and threats. Talk to your peers. Visit online forums. And see how experts and bloggers (like yours truly) are noting how the right combination of technology, training, and policies can effectively combat cybersecurity threats.

 

  Find the full article on GovLoop.

The title of this post raises an important question and one that seems to be on the mind of everyone who works in an infrastructure role these days. How are automation and orchestration going to transform my role as an infrastructure engineer? APIs seem to be all the rage, and vendors are tripping over themselves to integrate northbound APIs, southbound APIs, dynamic/distributed workloads, and abstraction layers anywhere they can. What does it all mean for you and the way you run your infrastructure?

 

My guess is that it probably won’t impact your role all that much.

 

I can see the wheels turning already. Some of you are vehemently disagreeing with me and want to stop reading now, because you see every infrastructure engineer only interacting with an IDE, scripting all changes/deployments. Others of you are looking for validation for holding on to the familiar processes and procedures that have been developed over the years. Unfortunately, I think both of those approaches are flawed. Here’s why:

 

Do you need to learn to code? To some degree, yes! You need to learn to script and automate those repeatable tasks where a script will save you time. The thing is, this isn’t anything new. If you want to be an excellent infrastructure engineer, you’ve always needed to know how to script and automate tasks. If anything, this newly minted attention being placed on automation should make it less of an effort to achieve (anyone who’s had to write expect scripts for multiple platforms should be nodding their head at this point). A focus on automation doesn’t mean that you just now need to learn how to use these tools. It means that vendors are finally realizing the value and making this process easier for the end-user. If you don’t know how to script, you should pick a commonly used language and start learning it. I might suggest Python or PowerShell if you aren’t familiar with any languages just yet.
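If you are looking for a first repeatable task to script, something like the following hedged Python sketch is the usual shape of it: turning a spreadsheet of ports into configuration stanzas instead of typing them by hand. The CSV columns and IOS-style commands are assumptions made for the example.

```python
# Generate repetitive interface configuration from a port list instead of
# typing it by hand. The CSV columns and commands are illustrative.

import csv
import io

PORT_LIST = """\
interface,description,vlan
GigabitEthernet1/0/1,Floor 1 printer,20
GigabitEthernet1/0/2,Floor 1 AP,30
"""

TEMPLATE = """\
interface {interface}
 description {description}
 switchport access vlan {vlan}
"""

def generate_config(csv_text: str) -> str:
    """Render one config stanza per row of the port list."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return "\n".join(TEMPLATE.format(**row) for row in rows)

if __name__ == "__main__":
    print(generate_config(PORT_LIST))
```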

 

Do I need to re-tool and become a programmer?  Absolutely not! Programming is a skill in and of itself, and infrastructure engineers will not need to be full-fledged programmers as we move forward. By all means, if you want to shift careers, go for it. We need full-time programmers who understand how infrastructure really works. But, automation and orchestration aren’t going to demand that every engineer learn how to write their own compilers, optimize their code for obscure processors, or make their code operate across multiple platforms. If you are managing infrastructure through scripting, and you aren’t the size of Google, that level of optimization and reusability isn’t going to be necessary to see significant optimization of your processes. You won’t be building the platforms, just tweaking them to do your will.

 

Speaking of platforms, this is the main reason why I don’t think your job is really going to change that much. We’re in the early days of serious infrastructure automation. As the market matures, vendors are going to be offering more and more advanced orchestration platforms as part of their product catalog. You are likely going to interface with these platforms via a web front end or a CLI, not necessarily through scripts or APIs. Platforms will have easy-to-use front ends with an engine on the back end that does the scripting and API calls for you. Think about this in the terms of Amazon AWS. Their IaaS products are highly automated and orchestrated, but you primarily control that automation from a web control panel. Sure, you can dig in and start automating some of your own calls, but that isn’t really required by the large majority of organizations. This is going to be true for on-premises equipment moving forward as well.

 

Final Thoughts

 

Is life for the infrastructure engineer going to drastically change because of a push for automation? I don’t think so. That being said, scripting is a skill that you need in your toolbox if you want to be a serious infrastructure engineer. The nice thing about automation and scripting is that it requires predictability and standardization of your configurations, and this leads to stable and predictable systems. On the other hand, if scripting and automation sound like something you would enjoy doing as the primary function of your job, the market has never been better or had more opportunities to do it full time. We need people writing code who have infrastructure management experience.

 

Of course, I could be completely wrong about all of this, and I would love to hear your thoughts in the comments either way.

You may be wondering why, after creating four blog posts encouraging non-coders to give it a shot, select a language, and break down a problem into manageable pieces, I would now say to stop. The answer is simple, really: not everything is worth automating (unless, perhaps, you are operating at a scale similar to somebody like Amazon).

 

The 80-20 Rule

 

Here's my guideline: figure out which tasks take up the majority (i.e., 80%) of your time in a given period (a typical week, perhaps). Those are the tasks where making the time investment to develop an automated solution is most likely to see a payback. The other 20% are usually much worse candidates for automation, because the cost of automating them likely outweighs the time savings.

 

As a side note, the tasks that take up the time may not necessarily be related to a specific work request type. For example, I may spend 40% of my week processing firewall requests, and another 20% processing routing requests, and another 20% troubleshooting connectivity issues. In all of these activities, I spend time identifying what device, firewall zone, or VRF various IP addresses are in, so that I can write the correct firewall rule, or add routing in the right places, or track next-hops in a traceroute where DNS is missing. In this case, I would gain the most immediate benefits if I could automate IP address research.
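As an example of what automating that IP address research might look like, here is a minimal Python sketch that maps an address to a zone and VRF using a subnet table. The table itself is invented; in practice you would export it from wherever that data lives today.

```python
# Look up which zone/VRF an IP address belongs to, given an exported
# subnet table. The table below is invented for illustration.

import ipaddress

SUBNET_TABLE = [
    ("10.10.0.0/16",   {"zone": "corp-users", "vrf": "CORP"}),
    ("10.20.5.0/24",   {"zone": "dmz-web",    "vrf": "DMZ"}),
    ("192.168.0.0/16", {"zone": "lab",        "vrf": "LAB"}),
]

def lookup(ip: str):
    """Return the most specific matching subnet record for an IP, or None."""
    addr = ipaddress.ip_address(ip)
    matches = [(ipaddress.ip_network(net), info)
               for net, info in SUBNET_TABLE
               if addr in ipaddress.ip_network(net)]
    if not matches:
        return None
    # Prefer the longest prefix if subnets overlap.
    return max(matches, key=lambda m: m[0].prefixlen)

print(lookup("10.20.5.37"))   # matches the DMZ record
print(lookup("172.16.1.1"))   # None: unknown address
```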

 

I don't want to be misunderstood; there is value in creating process and automation around how a firewall request comes into the queue, for example, but the value overall is lower than for a tool that can tell me lots of information about an IP address.

 

That Seems Obvious

 

You'd think that it was intuitive that we would do the right thing, but sometimes things don't go according to plan:

 

Feeping Creatures!

 

Once you write a helpful tool or an automation, somebody will come back and say, "Ah, what if I need to know X information too? I need that once a month when I do the Y report." As a helpful person, it's tempting to immediately try to adapt the code to cover every conceivable corner case and usage example, but having been down that path, I counsel against doing so. It typically makes the code unmanageably complex due to all the conditions being evaluated and, worse, it goes firmly against the 80-20 rule above. (Feeping Creatures is a Spoonerism on Creeping Features, i.e. an ever-expanding feature list for a product.)

 

A Desire to Automate Everything

 

There's a great story in What Do You Care What Other People Think (Richard Feynman) that talks about Mr. Frankel, who had developed a system using a suite of IBM machines to run the calculations for the atomic bomb that was being developed at Los Alamos.

 

"Well, Mr. Frankel, who started this program, began to suffer from the computer disease that anybody who works with computers now knows about. [...] Frankel wasn't paying any attention; he wasn't supervising anybody. [...] (H)e was sitting in a room figuring out how to make one tabulator automatically print arctangent X, and then it would start and it would print columns and then bitsi, bitsi, bitsi, and calculate the arc-tangent automatically by integrating as it went along and make a whole table in one operation.

 

Absolutely useless. We had tables of arc-tangents. But if you've ever worked with computers, you understand the disease -- the delight in being able to see how much you can do. But he got the disease for the first time, the poor fellow who invented the thing."

 

It's exciting to automate things or to take a task that previously took minutes, and turn it into a task that takes seconds. It's amazing to watch the 80% shrink down and down and see productivity go up. It's addictive. And so, inevitably, once one task is automated, we begin looking for the next task we can feel good about, or we start thinking of ways we could make what we already did even better. Sometimes the coder is the source of creeping features.

 

It's very easy to lose touch with the larger picture and lose focus on the tasks that will generate measurable gains. I've fallen foul of this myself in the past, and was delighted, for example, with a script I spent four days writing, which pulled apart log entries from a firewall and ran all kinds of analyses on them, allowing you to slice the data any which way and generate statistics. Truly amazing! The problem is, I didn't have a use for most of the stats I was able to produce, and I could have fairly easily worked out the most useful ones in Excel in about 30 minutes. I got caught up in being able to do something, rather than actually needing to do it.

 

And So...

 

Solve A Real Problem

 

Despite my cautions above, I maintain that the best way to learn to code is to find a real problem that you want to solve and try to write code to do it. Okay, there are some cautions to add here, not the least of which is to run tests and confirm the output. More than once, I've written code that seemed great when I ran it on a couple of lines of test data, but when I ran it on thousands of lines of actual data, I discovered oddities in the input data, or found that the loop processing all the data was carelessly reusing variables, or something similar. Just like I tell my kids with their math homework: sanity check the output. If a script claims that a 10Gbps link was running at 30Gbps, maybe there's a problem with how that figure is being calculated.
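Here is a small illustration of that sanity-check habit, with invented numbers: if a computed utilization exceeds what the link can physically carry, the calculation (or the input data) is suspect.

```python
# If a computed link utilization exceeds physical capacity, the calculation
# or the input data is wrong. Counter handling is deliberately simplified.

LINK_CAPACITY_BPS = 10_000_000_000  # 10 Gbps

def throughput_bps(byte_delta: int, interval_s: float) -> float:
    """Convert an interface byte-counter delta over an interval into bits/sec."""
    return (byte_delta * 8) / interval_s

rate = throughput_bps(byte_delta=4_500_000_000, interval_s=1.0)

if rate > LINK_CAPACITY_BPS:
    # A 10 Gbps link "running at 30 Gbps" means a bug: wrong interval,
    # counter wrap, or bytes/bits confusion -- not a miracle link.
    print(f"Implausible result: {rate/1e9:.1f} Gbps on a 10 Gbps link -- check the math")
else:
    print(f"Measured {rate/1e9:.1f} Gbps ({rate / LINK_CAPACITY_BPS:.0%} utilization)")
```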

 

Don't Be Afraid to Start Small

 

Writing a Hello World! script may feel like one of the most pointless activities you may ever undertake, but for a total beginner, it means something was achieved and, if nothing else, you learned how to output text to the screen. The phrase, "Don't try to boil the ocean," speaks to this concept quite nicely, too.

 

Be Safe!

 

If your ultimate aim is to automate production device configurations or orchestrate various APIs to dance to your will, that's great, but don't start off by testing your scripts in production. Use device VMs where possible to develop interactions with different pieces of software. I also recommend starting by working with read commands before jumping right in to the potentially destructive stuff. After all, after writing a change to a device, it's important to know how to verify that the change was successful. Developing those skills first will prove useful later on.

 

Learn how to test for, detect, and safely handle errors that arise along the way, particularly the responses from the devices you are trying to control. Sanitize your inputs! If your script expects an IPv4 address as an input, validate that what you were given is actually a valid IPv4 address. Add your own business rules to that validation if required (e.g. a script might only work with 10.x.x.x addresses, and all other IPs require human input). The phrase Garbage in, garbage out, is all too true when humans provide the garbage.
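For the IPv4 example above, a hedged sketch of that validation might look like this; the 10.0.0.0/8 business rule is the hypothetical one from the text.

```python
# Reject anything that isn't a valid IPv4 address, then apply a business rule
# (here, the hypothetical "10.x.x.x only" policy) before doing any work.

import ipaddress

ALLOWED_RANGE = ipaddress.ip_network("10.0.0.0/8")

def validate_ip(raw: str) -> ipaddress.IPv4Address:
    """Return a validated IPv4 address inside the allowed range, or raise ValueError."""
    try:
        addr = ipaddress.ip_address(raw.strip())
    except ValueError:
        raise ValueError(f"{raw!r} is not a valid IP address")
    if not isinstance(addr, ipaddress.IPv4Address):
        raise ValueError(f"{raw!r} is not an IPv4 address")
    if addr not in ALLOWED_RANGE:
        raise ValueError(f"{addr} is outside {ALLOWED_RANGE}; needs human review")
    return addr

for candidate in ["10.1.2.3", "8.8.8.8", "not-an-ip"]:
    try:
        print(f"{candidate}: accepted as {validate_ip(candidate)}")
    except ValueError as err:
        print(f"{candidate}: rejected ({err})")
```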

 

Scale Out Carefully

 

To paraphrase a common saying, automation allows you to make mistakes on hundreds of devices much faster than you could possibly do it by hand. Start small with a proof of concept, and demonstrate that the code is solid. Once there's confidence that the code is reliable, it's more likely to be accepted for use on a wider scale. That leads neatly into the last point:

 

Good Luck Convincing People

 

It seems to me that everybody loves scripting and automation right up to the point where it needs to be allowed to run autonomously. Think of it like the Google autonomous car: for sure, the engineering team was pretty confident that the code was fairly solid, but they wouldn't let that car out on the highway without human supervision. And so it is with automation; when the results of some kind of process automation can be reviewed by a human before deployment, that appears to be an acceptable risk from a management team's perspective. Now suggest that the human intervention is no longer required, and that the software can be trusted, and see what response you get.

 

A coder I respect quite a bit used to talk about blast radius: what's the impact of a change beyond the box on which the change is taking place, and what's the potential impact of the change as a whole? We do this all the time when evaluating change risk categories (is it low, medium, or high?) by considering what happens if a change goes wrong. Scripts are no different. A change that adds an SNMP string to every device in the network, for example, is probably fairly harmless. A change that creates a new SSH access-list, on the other hand, could end up locking everybody out of every device if it is implemented incorrectly. What impact would that have on device management and operations?

 

However...

 

I really recommend giving programming a shot. It isn't necessary to be a hotshot coder to have success (trust me, I am not a hotshot coder), but having an understanding of coding will, I believe, positively impact other areas of your work. Sometimes a programming mindset can reveal ways to approach problems that didn't show themselves before. And while you're learning to code, if you don't already know how to work in a UNIX (Linux, BSD, MacOS, etc.) shell, that would be a great stretch goal to add to your list!

 

I hope that this mini-series of posts has been useful. If you do decide to start coding, I would love to hear back from you on how you got on, what challenges you faced and, ultimately, if you were able to code something (no matter how small) that helped you with your job!

I'm in Vancouver, BC this week, and jet lag has given me some extra hours to get this edition of the Actuator done. Maybe while I am here I can ask Canadians how they feel about Amazon taking over the world. My guess is they won't care as long as hockey still exists, and they can get equipment with Prime delivery.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Windows 10 Source Code Leak is a Minor Incident at Best

Seems to be some inconsistent information regarding the Windows "leak" last week.

 

Recycled Falcon 9 rocket survives one of SpaceX's most challenging landings yet

I enjoy watching SpaceX make the possibility of regular space travel a real thing to happen in my lifetime.

 

The RNC Files: Inside the Largest US Voter Data Leak

Including this link not for the politics, but for the shoddy data security. Reminder: data is valuable; treat it right.

 

Is Continuing to Patch Windows XP a Mistake? -

This is a great question. I really believe that it *is* a mistake, but I also understand that Microsoft wants to help do what they can to keep users safe.

 

Amazon Is Trying to Control the Underlying Infrastructure of Our Economy

I'm not trying to alarm you or anything with this headline, but in related news, Walmart can be a jerk, too.

 

This Company Is Designing Drones To Knock Out Other Drones

Because they read the other article and know that this is the only way to save humanity.

 

Harry Potter: How the boy wizard enchanted the world

Because it's the 20th anniversary and I've enjoyed watching my kids' faces light up when they watch the movies.

 

Vancouver is one of my favorite cities to visit and jog along the water:

 

FullSizeRender 1.jpg

By Joe Kim, SolarWinds EVP, Engineering & Global CTO

 

There continues to be pressure on government IT to optimize and modernize, and I wanted to share a blog written in 2016 by my SolarWinds colleague, Mav Turner.

 

Federal IT professionals are in the midst of significant network modernization initiatives that are fraught with peril. Modernizing for the cloud and achieving greater agility are attractive goals, but security vulnerabilities can all too easily spring up during the modernization phase.

 

A path paved with danger, ending in riches

 

Last year, my company, SolarWinds, released the results of a Federal Cybersecurity Survey showing that the road to modernization is marked with risk. Forty-eight percent of respondents reported that IT consolidation and modernization efforts have led to an increase in IT security issues. These primarily stem from incomplete transitions (according to 48 percent of respondents), overly complex management tools (46 percent), and a lack of training (44 percent).

 

The road to modernization can potentially lead to great rewards. Twenty-two percent of respondents actually felt that modernization can ultimately decrease security challenges. Among those, 55 percent cited the benefits of replacing old, legacy software, while another 52 percent felt that updated equipment offered a security advantage. Still more (42 percent) felt that newer software was easier to use and manage.

 

The challenge is getting there. As respondents indicated, issues are more likely to occur in the transitional period between out-with-the-old and in-with-the-new. During this precarious time, federal administrators need to be hyperaware of the dangers lurking just around the corner.

 

Here are a few strategies that can help.

 

Invest in training

 

Federal IT professionals should not trust their legacy systems or modern IT tools to someone without the proper skill sets or knowledge.

 

Workers who do not understand how to use, manage, and implement new systems can be security threats in themselves. Their inexperience can put networks and data at risk. Agencies must invest in training programs to help ensure that their administrators, both new and seasoned, are familiar with the deployment and management of modern solutions.

 

Maximize the budget

 

If the money is there, it’s up to federal CIOs to spend it wisely. Some funds may go to the aforementioned training, while others may go to onboarding new staff. Yet another portion could go to investing in new technologies that can help ease the transition from legacy to modernized systems.

 

Avoid doing too much at once

 

That transition should be gradual, as a successful modernization strategy is built a win at a time.

 

Administrators should start upgrades with a smaller set of applications or systems, rather than an entire infrastructure. As upgrades are completed, retrospective analyses should be performed to help ensure that any security vulnerabilities that were opened during the transition are now closed. Connected systems should be upgraded simultaneously. Further analyses should focus on length of time for the transition, number of staff required, and impact on operations, followed by moving on to the next incremental upgrade.

 

Find the full article on Federal News Radio.
