
Geek Speak


In Austin this week, filming an episode of SolarWinds Lab. I heard there may be snow in the forecast there. I’m starting to get the sense that winter hates me.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Google Technique Offers Spectre Vulnerability Fix with No Performance Loss

Considering they spent six months working on this, I’m not surprised. What does surprise me is the comment about how they shared their fix with other companies, including competitors. Maybe we are starting to see the beginnings of cooperation that will result in better security for us all.

 

Timing of $24 million stock sale by Intel CEO draws fire

Move along, nothing to see here.

 

Game of Drones – Researchers devised a technique to detect drone surveillance

We aren’t far away from robot armies.

 

Cortana had a crappy CES

For all of the money that Microsoft spends on marketing the various products and services they have to offer, I am surprised that they didn’t jump at the chance to have Cortana featured at CES.

 

Power restored to CES show floor after 2-hour blackout

Well, at least Cortana wasn’t to blame. And I think this shows that we are only one 24-hour blackout away from descending into total chaos as a nation.

 

Meltdown-Spectre: Four things every Windows admin needs to do now

Good checklist to consider, especially the DON’T PANIC part.

 

The puzzle: Why do scientists typically respond to legitimate scientific criticism in an angry, defensive, closed, non-scientific way? The answer: We’re trained to do this during the process of responding to peer review.

Easily the longest title ever for an Actuator link. Have a read and think about how scientists are very, very human. We are all trained to be defensive; I find this especially true in the field of IT. I’ve certainly seen this happen in meetings and in online forums.

 


By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

It’s the time of year when we look toward the future. Here's an interesting article from my colleague, Joe Kim, where he provides a few predictions.

 

Want a good idea of what’s coming next in federal IT? Look no further than the financial services industry.

 

Consider the similarities between financial firms and government agencies. Both are highly regulated and strive for greater agility, efficiency, and control of their networks and data. Also, cybersecurity remains a core necessity for organizations in both industries.

 

Technologies that have become popular in the financial services industry are now gaining traction in federal IT. Let’s focus on three of these—blockchain, software-defined networking (SDN), and containers—and explore what they mean for agencies’ network management and security initiatives.

 

Blockchain

 

A blockchain is a digital ledger of transactions and ordered records, or blocks. It’s an easily verifiable, distributed database that can be used to keep permanent records of all the transactions that take place over a network.
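To make the "easily verifiable" part concrete, here is a minimal Python sketch of a hash-linked ledger. It illustrates only the structure described above (blocks chained by hashes), not any production blockchain; all names and values are invented for the example.

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash everything in the block except its own hash field."""
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(index, transactions, prev_hash):
    """Create a block whose hash covers its contents and its predecessor's hash."""
    block = {"index": index, "timestamp": time.time(),
             "transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

# Genesis block plus one transaction block. Tampering with any earlier block
# changes its hash and breaks every link after it, which is why the ledger
# is easy to verify.
chain = [make_block(0, [], "0" * 64)]
chain.append(make_block(1, [{"from": "a", "to": "b", "amount": 5}], chain[0]["hash"]))

def verify(chain):
    return all(curr["prev_hash"] == prev["hash"] and curr["hash"] == block_hash(curr)
               for prev, curr in zip(chain, chain[1:]))

print(verify(chain))  # True until any block is altered
```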

 

While invented to record bitcoin transactions, blockchain can be a powerful tool for better data security. For example, governments are using blockchain to provide services to citizens. There’s even a Congressional Blockchain Caucus dedicated to educating government officials on its benefits.

 

Blockchain is far from the only solution that agencies should consider, however. Traditional network monitoring, which allows for automated threat detection across the network, and user device monitoring are still the bread and butter of network and data security.

 

SDN

 

SDN is another technology that many financial services firms and agencies have explored as a means of solidifying network security. SDNs are more pliable and readily adaptable in response to evolving threat vectors. They also provide network managers with central control of the entire network infrastructure, allowing them to respond more quickly to suspicious activity.

 

But an SDN is still only as good as its network management protocols, which must be equipped to adequately handle virtual networks. Managers must be able to monitor end-to-end network analytics and performance statistics across the network, which, with SDN, are likely to be very abstract and distributed. Special care must be taken, and the appropriate tools deployed, to help ensure that managers maintain the same amount of network visibility in an SDN as they would have with a traditional network.

 

Containers

 

For organizations seeking a more streamlined approach to application development, Linux® containers are like nirvana. Essentially extremely lightweight and highly portable application development environments, containers offer the promise of much shorter development times and substantial cost savings. Because of these benefits, banking giants like Goldman Sachs® and Bank of America® are using containers, and there is growing interest in the federal government as well.

 

However, there have been concerns around container security. Because there are many different container platforms available, it is tricky to design a standard security tool that works well with all of them. Containers comprise multiple stacks and layers, each of which must be secured individually. There’s also the nature of containers themselves: their ephemeral, transportable design appears, on the surface, staunchly anti-security.

 

Federal developers who are considering using containers need to be aware of these security implications and risks. Although container security has gotten a lot better over the years, agencies should still consider taking steps to secure their containers or use enterprise-hardened container solutions that comply with federal guidelines and recommendations, such as those laid out in the NIST® Application Container Security Guide.

 

We clearly are in the midst of a technological revolution. While financial services and other non-government industries have thus far been the primary torchbearers for this movement, the federal government is now ready to take the lead. With blockchain, SDN, and containers, federal IT professionals have three innovative technologies to use—along with traditional network management practices—to strengthen security and innovation.

 

Find the full article on our partner DLT’s blog Technically Speaking.

When organizations first take on the challenge of setting up a disaster recovery plan, it’s almost always based on the premise that a complete failure will occur. With that in mind, we take the approach of planning for a complete recovery. We replicate our services and VMs to some sort of secondary site and go through the process of documenting how to bring them all up again. While this may be the basis of the technical recovery portion of a DR plan, it’s important to take a step back before jumping right into the assumption of having to recover from a complete failure. Disasters come in all shapes, forms, and sizes, and a great DR plan will accommodate as many types of disasters as possible. For example, we wouldn’t use the same “runbook” to recover from simple data loss that we would use to recover from the total devastation of a hurricane. That just wouldn’t make sense. So, even before beginning the recovery portions of our disaster recovery plans, we really should focus on the disaster portion.

 

Classifying Disasters

 

As mentioned above, the human mind always seems to jump to planning for the worst-case scenario upon hearing the words disaster recovery: a building burning down, flooding, etc. What we fail to plan for are other, less dramatic disasters, such as a temporary loss of power or loss of building access due to quarantine. So, with that said, let’s begin to classify disasters. For the most part, we can lump a disaster into one of two main categories:

 

Natural Disasters – These are the most recognized types of disasters. Think of events such as a hurricane, flooding, fire, earthquake, lightning, water damage, etc. When planning for a natural disaster, we can normally operate under the assumption that we will be performing a complete recovery or avoidance scenario to a secondary location.

 

Man-made Disasters – These are the types of disasters that are lesser known to organizations when looking at DR. Think of things such as temporary loss of power, cyberattacks, ransomware, protests, etc. While these intentional and unintentional acts are not as commonly planned for, a good disaster recovery plan will address them, as the recovery is often much different from that of a natural disaster.

 

Once we have classified our disaster into one of these two categories, we can then move on by drilling down further on the disasters themselves. Performing a risk and impact assessment of the disaster scenarios is a great next step. Answers to questions like the ones listed below should be considered when performing our risk assessment, because they allow us to further classify our disasters and, in turn, define expectations and appropriate responses accordingly.

 

  • Do we still have access to our main premises?
  • Have we lost any data?
  • Has any IT function been depleted or lost?
  • Do we have loss of skill set?

 

How these questions are answered as they pertain to a disaster can completely change our recovery scenarios. For example, if we had a fire in the data center and lost data, we would most likely be failing over to another building in a designated amount of time. However, if we had also lost employees in that fire, more specifically IT employees, then the time to recover will certainly be extended, as we most likely would have lost the skill sets and talent needed to execute the DR plan. Another great example comes in the form of ransomware. While we would still have physical access to our main premises, the data loss could be much greater due to widespread encryption from the ransomware itself. If our backups were not air-gapped or separate from our infrastructure, then we may also have encrypted backups, meaning we have lost an IT function, thus provoking a possible failover scenario even with physical access to the building. On the flip side, our risks may not even be technical in nature. What is the impact of losing physical access to our building as the result of protests or chemical spills? Some disasters like these may not even require a recovery process at all, but still pose a threat due to the loss of access to the hardware.
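As a concrete illustration of how those assessment answers drive the response, here is a hypothetical Python sketch that maps the four questions above to a runbook choice. The scenario names and runbooks are invented for this example, not drawn from any particular DR product or plan.

```python
# Hypothetical mapping from risk-assessment answers to a recovery runbook.

def select_runbook(premises_accessible, data_lost, it_function_lost, skills_lost):
    if not premises_accessible and data_lost:
        runbook = "full-failover-to-secondary-site"
    elif data_lost and it_function_lost:
        # e.g., ransomware that also encrypted non-air-gapped backups
        runbook = "failover-and-restore-from-offsite-backups"
    elif data_lost:
        runbook = "restore-data-in-place"
    elif not premises_accessible:
        # e.g., protests or a chemical spill: systems run, but nobody can get in
        runbook = "remote-operations-no-recovery"
    else:
        runbook = "monitor-and-assess"
    if skills_lost:
        # losing IT staff extends the expected time to recover
        runbook += "+engage-external-responders"
    return runbook

# The ransomware example from the text: building accessible, data and an
# IT function (backups) lost, staff unharmed.
print(select_runbook(premises_accessible=True, data_lost=True,
                     it_function_lost=True, skills_lost=False))
```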

 

Disaster recovery is a major undertaking, no matter the size of the company or IT infrastructure, and it can take copious amounts of time and resources to get off the ground. With that said, don’t make the mistake of only planning for the big natural disasters. While they may be a great starting point, it’s best to also list out some of the more common, more probable types of disasters, and document the risks and recovery steps for each. In the end, you are more likely to be battling cyberattacks, power loss, and data corruption than you are to be fighting off a hurricane. The key takeaway: classify many different disaster types and document them, and in the end you will have a more robust, more holistic plan you can use when the time comes. I would love to hear from you about your journeys with DR. How do you begin to classify disasters or construct a DR plan? Have you experienced any "uncommon" scenarios that your DR plan has or hasn't addressed? Leave some comments below and let's keep this conversation going.

Back in the saddle this week, feeling rested and ready to get 2018 started. We had quite a few interesting stories last week, too. Never a dull moment in the field of technology.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

A Simple Explanation of the Differences Between Meltdown and Spectre

In case you didn’t hear, our CPUs have been hacked. Well, they could be hacked. We should all be panicking. Or not worried at all. It’s hard to say, really, because there is a lot of misinformation going around right now about Meltdown and Spectre. This article helps clear up a few things. Also, it shows that we’ve now hit a point in time where we create logos for security vulnerabilities. What a time to be alive.

 

The No Good, Terrible Processor Flaw and SQL Server Deployments – Nearly Everything You Need To Know

Here’s a great summary of how Meltdown and Spectre may affect SQL Server workloads. There’s a lot of FUD being spread about the performance hit from the patches; this article will help you focus on the important details.

 

All the cool new friends you'll meet when you drink raw water

Here’s the upside to the diseases that raw water can carry: only hipsters in Silicon Valley will be affected at first.

 

The Intolerable Speech Rule: the Paradox of Tolerance for tech companies

Someone needs to get this article in front of Jack Dorsey at Twitter. It’s a simple enough rule that would remove so many of the jerks using their service.

 

You’re Descended from Royalty and So Is Everybody Else

In case you got one of those DNA kits as a gift this year, I‘m here to ruin the surprise for you. We’re all related to royalty because math.

 

A practical guide to microchip implants

I don’t think I’m ready for this future yet.

 

DHS Says 246,000 Employees' Personal Details Were Exposed

The word ‘Security’ is literally in their name; you would think they could do that part right.

 


 

The use of cloud technology and services--especially public cloud--has become nearly ubiquitous, making its way into even the most conservative organizations. Some find it challenging to support the service following adoption, assuming that supportability resides with the public cloud provider and that the business unit that decided to leverage the public cloud is on its own. (And while we’re at it, well done for them, because they didn’t want to use our own internal infrastructure or private cloud, if we’re a more advanced organization.)

 

Sometimes It Isn't Up to IT

But to what extent does this binary (and somewhat logical) vision of things hold true? The old adage that says, "If it has knobs, it’s supported by our internal IT department," is once again proving correct. Even with public cloud, an infrastructure that is (hopefully) managed by a third-party provider, there are very meager chances that our organization will exonerate us from the burden of supporting any applications that run in the cloud. Chances are even slimmer for IT to push back on management decisions: they may seem ill-considered from an IT perspective, but make sense (for better or worse) from a business perspective.

 

Challenges Ahead

With business units’ entitlement to leverage cloud services comes the question of which public clouds will be used, or rather the probability that multiple cloud providers will be used without any consideration of IT’s ability to support the service. This makes it very difficult for IT to support and monitor the availability of services without having IT operations jump from the monitoring console of cloud provider A to their on-premises solution, and then back to cloud provider B’s own pane of glass.

 

With that comes the question of onboarding IT personnel into each of the public cloud providers' IAM (Identity & Access Management) platforms, and of managing different sets of permissions for each application and each platform. This adds heavy and unnecessary management overhead on top of IT’s existing responsibilities.

 

And finally comes the relevance of monitoring the off-premises infrastructure with off-premises tools, such as those provided by public cloud operators. One potential issue, although unlikely, is the unavailability of the off-premises monitoring platform, or a major outage at the public cloud provider. Another issue could be, in the case where an internal process relies on an externally hosted application, that the off-premises application reports as being up and running at the public cloud provider, and yet is unreachable from the internal network.

 

The option of running an off-premises monitoring function exists, but it presents several risks. Beyond the operational risk of being oblivious to what is going on during a network outage or dysfunction (either because access to the off-premises platform is unavailable, or because the off-premises solution cannot see the on-premises infrastructure), there is a more serious and insidious threat: exposing an organization’s entire network and systems topology to a third party. While this may be a minor problem for smaller companies, larger organizations operating in regulated markets may think twice about exposing their assets and will generally favor on-premises solutions.

 

Getting Cloud Monitoring Right

Cloud monitoring doesn’t differ from traditional on-premises infrastructure monitoring, and shouldn’t constitute a separate discipline. In the context of hybrid IT, where boundaries between on-premises and off-premises infrastructures dissolve to place applications at the crossroads of business and IT interests, there is intrinsic value to be found with on-premises monitoring of cloud-based assets.

 

A platform-agnostic approach to monitoring on-premises and cloud assets via a unified interface, backed by the consistent naming of metrics and attributes across platforms, will help IT operators instantly understand what is happening, regardless of the infrastructure in which the issue occurs, and without necessarily having to understand or learn the taxonomy imposed by a given cloud provider.
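As a sketch of what consistent naming might look like in practice, the snippet below maps each platform's metric names onto one internal schema so operators see the same keys everywhere. The per-provider metric names and the schema itself are illustrative placeholders, not any vendor's actual API.

```python
# Map each platform's native metric names onto one shared schema.
# The source names below are stand-ins for whatever each provider exposes.
CANONICAL = {
    "cloud_a": {"CPUUtilization": "cpu_percent", "NetworkIn": "net_in_bytes"},
    "cloud_b": {"Percentage CPU": "cpu_percent", "Network In Total": "net_in_bytes"},
    "onprem":  {"cpu.load.pct": "cpu_percent", "if.in.octets": "net_in_bytes"},
}

def normalize(platform, raw_metrics):
    """Translate one platform's metric names into the shared schema."""
    mapping = CANONICAL[platform]
    return {mapping[name]: value for name, value in raw_metrics.items() if name in mapping}

samples = [("cloud_a", {"CPUUtilization": 71.0}),
           ("onprem", {"cpu.load.pct": 64.2})]
for platform, raw in samples:
    # Same keys regardless of source, so one dashboard and one alert rule work.
    print(platform, normalize(platform, raw))
```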

 

IT departments can thus attain a holistic view that goes beyond infrastructure silos or inherent differences between clouds, and focus on delivering the value that the business expects from them: guaranteeing the availability and performance of business systems regardless of their location, and ensuring the monitoring function is not impacted by external events, all while respecting SLAs and maintaining control over their infrastructure.

By Paul Parker, SolarWinds Federal and National Government Chief Technologist

 

I'm the new Chief Technologist for our Federal and National Government team, and I’m glad to be joining the conversation on THWACK® with all of you. Here's an interesting article from my colleague, Joe Kim, in which he argues that military IT professionals can and should adopt a more proactive approach to combatting cyberattacks.

 

Today’s cyberattackers are part of a large, intelligent, and perhaps most dangerously, incredibly profitable industry. These attacks can come in all shapes and sizes and impact every type of government organization. In 2015, attackers breached the DoD network and gained access to approximately 5.6 million fingerprint records, impacting several years' worth of security clearance archives. This level of threat isn't new, but has grown noticeably more sophisticated—and regular—in recent years.

 

So why are defense organizations so vulnerable?

 

Brave new world

 

Military organizations, just like any other organizations, are susceptible to the changing tides of technology, with the Warfighter Information Network-Tactical (WIN-T) offering an example of the challenges they face. WIN-T is the backbone of the U.S. Army’s common tactical communications network, and is relied upon to enable mission command and secure, reliable voice, video, and data communications at all times, regardless of location.

 

To help ensure “always on” communications, network connectivity must be maintained to allow WIN-T units to exchange information with each other and carry out their mission objectives. WIN-T was facing bandwidth delay and latency issues, resulting in outages and sporadic communications. They needed a solution that was powerful and easy to use. This is an important lesson for IT professionals tasked with adopting new and unfamiliar technology.

 

WIN-T also required detailed records of their VoIP calls to comply with regulatory requirements. Available solutions were expensive and cumbersome, so WIN-T worked with its solution provider, SolarWinds, to develop a low-cost VoIP tool that met their technical mission requirements.

 

The WIN-T use case demonstrates that defense departments are looking to expand and diversify their networks and tools. This has created a new challenge for military IT professionals who must seamlessly incorporate complex new technologies that could potentially expose the organization to new vulnerabilities.

 

Impact of a breach

 

Military organizations are responsible for incredibly sensitive information, from national security details to personnel information. When the military suffers a cyberattack, there are far greater implications for it and for society as a whole.

 

If a military organization were breached, for example, and sensitive data fell into the wrong hands, the issue would become a matter of national security, and lives could be put at risk. The value of military data is astronomical, which is why attackers are growing more focused on waging cyberwarfare against military organizations. The higher the prize, the greater the ransom.

 

However, it's not all doom and gloom, and military IT professionals do have defenses to help turn the tide in the fight against cyberattackers. The trick is to be proactive.

 

Be proactive

 

Far too many organizations rely on reactive techniques to deal with cyberattacks. Wouldn't it be far less damaging to be proactive, rather than reactive? Of course, this is easier said than done, but there are ways in which military IT professionals can take a proactive approach to cybercrime.

 

First, they should apply cutting-edge technology. Outdated technologies essentially open doors for well-equipped attackers to walk through. IT professionals should be given the support needed to implement this technology, if military organizations are serious about safeguarding against cyberattacks.

 

By procuring the latest tools, and ensuring internally that departments are carrying out system updates when prompted, military organizations can help protect themselves against the sophisticated techniques of cyberattackers.

 

Second, automation should be employed by military organizations as a security tool. By automating processes—from patch management to reporting—they can help ensure an instantaneous reaction to potential threats and vulnerabilities. Automation can also help safeguard against the same type of breach in the future, providing an automated response should the same issue occur.

 

Third, all devices should be tracked within a military organization. This may sound paranoid, but many breaches are a result of insider threats, whether it's something as innocent as an end-user plugging in a USB, or something altogether more sinister.

 

Automation can be used to detect unauthorized network access from a device within the organization, enabling the system administrators to track and locate where the device is, and who may be using it.
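A minimal sketch of that kind of automated check, assuming an inventory of authorized devices and a feed of observed MAC addresses and switch ports (both stand-ins for whatever your NAC or monitoring tool actually provides):

```python
# Compare devices observed on the network against an authorized inventory
# and alert on anything unknown. Addresses and ports here are invented.
AUTHORIZED_MACS = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}

def find_unauthorized(observed):
    """Return (mac, switch_port) pairs for devices not in the inventory."""
    return [(mac, port) for mac, port in observed if mac not in AUTHORIZED_MACS]

observed_now = [
    ("00:1a:2b:3c:4d:5e", "sw1/0/12"),
    ("de:ad:be:ef:00:01", "sw1/0/24"),  # e.g., an end-user's USB NIC
]

for mac, port in find_unauthorized(observed_now):
    # In a real deployment this would open a ticket or trigger a port shutdown.
    print(f"ALERT: unauthorized device {mac} seen on {port}")
```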

 

Despite the fear surrounding data breaches, military organizations are capable of standing firm against the next wave of innovative, ingenious cyberattacks.

 

Find the full article on Government Computing.


 

(This is the third part of a series. You can find Part One here and Part Two here.)

 

It behooves me to remind you that there are many spoilers beyond this point. If you haven't seen the movie yet, and don't want to know what's coming, bookmark this page to enjoy later.

 

Having tools without understanding history or context is usually bad.

 

On the flipside of using tools creatively, which I will discuss in the next part of the series, is using tools without understanding their context or history.

 

There are two analogs for this in the movie. First is how Charles can't remember the Westchester Incident. He continues to operate under the assumption that Logan is tormenting him for some reason, forcing him to live in a toppled-over water tank, and then dragging him cross-country when they are discovered. In reality, they'd been hiding from the repercussions of Charles' psychic outburst. But lacking that knowledge, Charles is ineffectual in helping their cause.

 

The second example is "X24,” an adult clone of Logan and something of a mindless killing machine. X24 is Logan without context, without history, without a frame of reference. And therefore, he is without remorse.

 

Both of these cases exemplify the harm that can come when a tool is operated by a user who doesn't fully understand why the tool exists or everything it is designed to do. It is nmap in the hands of a script kiddie.

 

As "experienced" IT professionals (that's code for "old farts"), one of our key goals should be sharing history and context with the younger set. As I wrote in "Respect Your Elders" (https://thwack.solarwinds.com/community/solarwinds-community/geek-speak_tht/blog/2015/06/04/respect-your-elders), everything in IT has a reason and a history. Forgetting that history can not only make you less effective, it can be downright dangerous. But newcomers to our field aren't going to learn that history from books. They're going to learn it from us if we are open and willing to share.

 

Lynchpin team members become force-multipliers, even if their specific contribution wasn't the most impactful.

 

In the movie, Logan shows up at the final battle. He doesn't defeat everyone, and technically all the kids should have been able to hold their own. But when he appears, it galvanizes them into working together.

 

A little earlier I mentioned that the mutant kids are able to hold their own against an army of reavers, robotically enhanced mercenaries intent on capturing and/or killing the children before they reach the Canadian border.

 

I should have mentioned that they are just barely holding their own. Before long, most are captured. It is only due to the timely arrival of Logan that they are able to regain the upper hand. And even then, Logan is the one who has to take on X24, their most powerful adversary.

 

Granted, it is Laura who ultimately ends the conflict with X24. Granted, it is the kids who disarm, disable, or kill the bulk of the soldiers.

 

But Logan's appearance changes the tide of the battle. Before he arrives, the kids are being picked off one by one. The reavers control the situation; they understand each kid and are able to neutralize their abilities with precision. After Logan appears on the scene, the reavers are fighting on two fronts, which disrupts their efforts, causes them to make careless mistakes, and ultimately costs them the fight.

 

In this moment, Logan is a "force multiplier": a tool, technique, or individual who dramatically increases the efficacy of the team. In effect, a force multiplier makes a group work as if it has more members, or members with a greater range of skills, than it actually possesses. While the concept is most commonly understood within military contexts, the fact is that many areas of work benefit from the presence of force multipliers.

 

In IT, we need to learn to acknowledge when a technology, technique, or even an individual (regardless of age or experience) is a force multiplier. We need to also understand that a force multiplier isn't a universal panacea. Something (or someone) who is a force multiplier in one context (day-to-day operations) isn't necessarily going to have the same effect in a different situation (rapid deployment of a new architecture).

 

It's okay to lie as long as you're telling the truth.

 

There are times in your IT career when you're going to need to lie. Not a little white "because the birthday cake is in the kitchen and we're not ready for you to come in yet" lie. Not a bending of the truth. I’m talking full-on, bald-faced lie.

 

You're going to get the email instructing you to disable someone's account at 2:00 p.m. because they're being let go. And then you're going to see that person in the hall and exchange pleasantries.

 

A co-worker will confide to you that they just got an amazing job offer, but they're not planning on giving notice for another two weeks. After that, you're going to be in a meeting with management offering staffing projections for the coming quarter, and you are going to feign acceptance that your co-worker is part of that equation.

 

Going back to the dinner scene on the farm with the Munroe family, the exchange about the school goes something like this:

Logan: “Careful, you're speaking to a man who ran a school… for a lot of years.”

Charles: “Well, that's correct. It was a… it was a kind of special needs school.”

Logan: “That's a good description.”

Charles: (indicating Logan) “He was there, too.”

Logan: “Yeah, I was in it, too. I got expelled out three times.”

Charles: “I wish I could say that you were a good pupil, but the words would choke me.”

 

From the Munroes’ point of view, this is a father and son reminiscing about their past. And you know what? It IS a father and son reminiscing about their past. All of the things they say have an emotional truth to them, even if they are a complete fabrication.

 

IT pros have access to so many systems and sources of insight that our non-IT co-workers can’t "enjoy." Therefore, we must endeavor to maintain the emotional truth of each situation, even when we have to mask the details.

 

But that isn't all I learned! Stay tuned for future installments of this series. And until then, Excelsior!

 

1 “Logan” (2017), Marvel Entertainment, distributed by 20th Century Fox

 

Welcome to 2018!

 

Just three days into the new year, Spectre and Meltdown made the news. These flaws affect both system security and performance, since the patches for them can degrade CPU performance significantly. Previously, we saw prominent companies use software to manipulate older-generation devices. And everyone seemed to be launching ICOs and adding blockchain or bitcoin to their company portfolios to ride the expanding cryptocurrency bubble.

 

The year ahead promises to be an exciting (for lack of a better descriptor) one for IT pros, developers, DevOps practitioners, and every other role you choose to claim for yourself. Check out the teaser video below:

 

 

 

And don't forget to check out the complete list of pro-dictions from adatole, patrick.hubbard, Dez, sqlrockstar, and myself; click the banner at the top of this post. We cover IoT, blockchain, data security, compliance, and more. Will our predictions turn out to be prophetic, or will they fail to come true? Let us know what you think in the comments section below.


The Legacy IT Pro

Posted by kpe Jan 4, 2018

In the fast-paced world of IT, can you afford to be a legacy IT pro? This is a concern for many, which makes it worth examining.

 

IT functions have been clearly separated since the early days of mainframes. You had your storage team, your server team, your networking team, and so on, but is that really the way we should continue moving forward? Do we as IT pros gain anything by keeping up this status quo? If you and your organization stay on this path, how long do you think you can keep it up?

 

The best way to define a legacy pro is to share a few examples. Let’s say you were hired onto the server team in a given enterprise environment around 2008. If you have not developed your skill set beyond Microsoft® Windows Server® 2008 or any related area since then, that’s legacy. A lot has happened in nine years, especially in the cloud and security sectors. That means that if you haven’t kept up with the latest technologies, you’ll likely end up being one of those legacy guys.

 

In networking, my specialty, the same definition applies. If you are a data center networking engineer and you are still doing three-tier design with spanning tree and all that good stuff, you are clearly missing out on the most recent trends.

 

So, the key takeaway here is: don’t be afraid to rejuvenate yourself AND the tools of your trade. Going back to our first example, ask yourself if you are really living up to your job title. Gone are the days of updating to a new software release every second year, or whatever your company policy used to be. You really need to tell your vendor of choice to adopt update cycles that match the trends of the market.

 

Now that you have progressed from a legacy IT pro to the next level, how do you take this even further? My suggestion is that you evolve from being a great IT pro to being an individual who has knowledge beyond your own area of expertise. It’s probably time you started envisioning yourself as a solution engineer.

 

A recurring theme these days is for clients to want a complete solution. In other words, organizations really do not want to deal with a collection of IT silos; they’d prefer to treat IT as a whole. This means that your success as an engineer on the networking/server/storage team is not only dependent on your own performance, but also that of your fellow engineers.

 

To deliver on this promise of a solution, you really need to start getting comfortable dealing with engineers and support staff from different parts of your organization. It doesn’t matter if you work in a consultancy role or in enterprise IT, this is something you need to start gradually incorporating into your workflow.

 

I suggest you start by establishing communication lines across your organization. Be open about your own job domains and tasks. Buy that co-worker from servers a cup of coffee and be genuinely interested in his/her area of expertise. Ask questions and show appreciation for his or her work.

 

Don’t be afraid to bring this level of cooperation to the attention of management to gain some traction across multiple business units. More often than not, you will get this level of support if you offer solutions that provide value.

 

Start sharing software tools and features across silos to spark further interest and energy in this new way of thinking. PerfStack now allows you to customize panes of glass for individual teams and groups. Why not use this to create a specific view for the storage team that gives them visibility into your NetFlow data?

 

I am not advocating a complete abandonment of your current role. I am suggesting instead that you transform your specialization into a new multi-level sphere of expertise. If you are on the networking team, go full speed ahead with that, but also pay attention to what is happening in the world of compute and maybe storage. Read about the topic, or even get some training on it. That way you are not completely oblivious to what’s going on around you, which makes communicating across the organization even easier. Doing these things will make you a better engineer and confirm that you are a true asset to your company. In the end, isn’t that what it’s all about?

 

To summarize, I do think it’s very important to evolve in this industry. If we are to meet future demands, we need to start thinking and acting differently. By gaining new skill sets and breaking down the silos we have built up over the years, we are on a clear path of evolution. Instead of being afraid of this evolution, look at it with a positive attitude and see all the possible opportunities that arise because of it.

 

With that in mind, I wish you the very best. Take care and go forth into this new era of IT!

 

/Kim

I’m still on holiday, but that won’t stop me from getting the Actuator done this week. I hope everyone had a safe and happy holiday season with family and friends. Let’s grab 2018 by the tail, together.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

A Message to Our Customers about iPhone Batteries and Performance

I’m stunned by this response from Apple. I don’t ever recall a company standing up like this. It’s clear they know they have a bit of an image problem right now, and are taking every step possible to earn back consumer confidence.

 

The Galaxy Note 8 reportedly has a battery problem of its own

But the good news is that they aren’t catching fire...yet.

 

Computer latency: 1977-2017

Ever wonder if your old computer from childhood was faster than the one you have today? Well, wonder no more! Read on to find out how the Apple IIe is the fastest machine ever built.

 

Net Promoter Score Considered Harmful (and What UX Professionals Can Do About It)

Including this because it uses math and data to prove a point about a metric that I believe is widely misunderstood to be a good thing.

 

Crime in New York City Plunges to a Level Not Seen Since the 1950s

Another link because math. It’s important to understand that having all the data doesn’t mean you can have all the answers. Nobody really knows why crime is dropping. Which means they don’t know why it will begin to rise, or when.

 

Ten years in, nobody has come up with a use for blockchain

Look for this article again next year, when the title is updated to “Eleven years”.

 

17 Things We Should Have Learned in 2017, but Probably Didn't

Wonderful list of mistakes that are likely to be mistakes again in 2018 (and beyond).

 

By Joe Kim, SolarWinds EVP, Engineering and Global CTO

 

Social media has given us many things, from the mass circulation of hilarious cat videos, to the proliferation of memes. However, social media is not commonly thought of as a tool for cybercriminals, or a possible aid in combatting cybercrime.

 

However, as government IT pros frantically spend valuable time and money bringing in complex threat-management software, one of the methods most easily used by hackers is right in front of you—assuming you’ve got your favorite social media page open.

 

Social skills

Social media can be a tool to both protect and disrupt, and attackers are eagerly screening social media profiles for any information that may present a vulnerability. Any status providing seemingly innocuous information may be of use, revealing details that could be weaponized by hackers.

 

Take LinkedIn®, for example. LinkedIn provides hackers with a resource that can be used nefariously: by viewing the profiles of system administrators, attackers can learn what systems they are working on. This is a very easy way for a cybercriminal to gain valuable information.

 

As mentioned, however, social media can also be a protective tool. By helping ensure that information is correctly shared within an organization, IT pros can more easily identify and tag attackers.

 

Cybercrime is organized within a community structure, with tools and tactics doled out among cybercriminals, making attacks faster and more effective.

 

This is a method that government IT pros need to mimic by turning to threat feeds, in which attack information is quickly shared to enable enhanced threat response. Whether it’s through an IP address or more complex behavioral analysis and analytics, a threat feed can help better combat cybercrime, and shares similar traits to social media.
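As an illustration of consuming such a feed, here is a hedged Python sketch that loads a list of known-bad IP addresses and checks connections against it. The feed contents and the connection-log format are hypothetical placeholders.

```python
import ipaddress

def load_feed(lines):
    """Parse one indicator per line, skipping comments and malformed entries."""
    indicators = set()
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        try:
            indicators.add(ipaddress.ip_address(line))
        except ValueError:
            pass  # ignore anything that isn't a valid address
    return indicators

# A stand-in for a shared threat feed and for outbound connection records.
feed = load_feed(["# example feed", "203.0.113.7", "198.51.100.23"])
connections = ["10.0.0.5 -> 203.0.113.7", "10.0.0.8 -> 192.0.2.1"]

for conn in connections:
    dst = ipaddress.ip_address(conn.split("->")[1].strip())
    if dst in feed:
        print(f"MATCH: {conn} appears in the threat feed")
```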

 

For government IT pros, the most important part of this similarity is the ability to share information with many people quickly, and in a consumable format. Then, by making this information actionable, threats can be tackled more effectively.

 

Internal affairs

The internal sharing of information is also key, but not always a priority within government. This is a real problem, especially when the rewards of more effective internal information sharing are so significant. However, unified tools or dashboards that display data about the ongoing status of agency networks and systems can help solve this problem by illuminating issues in a more effective way.

 

Take performance data, for example, which can tell you when a sudden surge in outbound traffic occurs, possibly indicating that someone is exfiltrating data. Identifying these security incidents and ensuring that reports are more inclusive will allow the entire team to understand and appreciate how threats are discovered. This means you can be confident that your organization is vigilant, and better equipped to deal with threats.
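To make the outbound-surge example concrete, here is a small Python sketch that flags an interval whose outbound volume far exceeds a rolling baseline. The threshold of three times the recent mean is an illustrative choice, not a recommendation from the article.

```python
from statistics import mean

def detect_surges(samples, window=6, factor=3.0):
    """Yield (index, value, baseline) where a sample exceeds factor * the prior window's mean."""
    for i in range(window, len(samples)):
        baseline = mean(samples[i - window:i])
        if baseline and samples[i] > factor * baseline:
            yield i, samples[i], baseline

# Outbound megabytes per interval; one suspicious spike near the end.
outbound_mb = [40, 38, 45, 41, 39, 44, 42, 310, 43]

for idx, value, baseline in detect_surges(outbound_mb):
    print(f"interval {idx}: {value} MB outbound vs ~{baseline:.0f} MB baseline")
```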

 

Essentially, government IT professionals should think carefully about what to post on social media. This doesn’t mean, however, that they should delete their accounts or start posting under some poorly thought-out pseudonym.

 

When used correctly, social media can provide public service IT professionals with more protection and a better understanding of potential threats. In a world where cyberattacks are getting ever more devastating, any additional help is surely worthy of a like.

 

Find the full article on PublicNet.

While the Word-A-Day Challenge has only completed its second year, it is already a labor of love for me. Last year the idea struck (as they so often do) in an unanticipated "a-ha!" moment, with barely enough time to see it realized. As I explained at the time, the words were recycled from another word-a-day challenge I take part in yearly.

 

This year was different. I had time to think and plan, and that was especially true of the list of words I wanted to present to the THWACK community. I knew they had to be special. Important. Meaningful not just as words can be in their own right, but meaningful to us in the IT world.

 

As I selected the words for the word-a-day challenge, I looked for ones with a particular feel and heft:

  1. They had to be clearly identifiable as technology words
  2. More than that, they needed to be words which have an enduring place in the IT lexicon
  3. And they needed to also be words which have a significant meaning outside of the IT context

 

In addition to hoping that words with those attributes would inspire discussion and offer each writer a variety of options for inspiration, I was also curious to see which way the arc of the conversations in the comments would bend for each. Would the community focus solely on the technical aspect? Would they avoid the tech and go for the alternate meanings? Would there be representation from both sides?

 

To put it in more concrete terms, would people choose to write about backbone as an aspect of biology, technology, or character? Would Bootstrap appeal to folks more as a method or a metaphor?

 

To say that the THWACK community exceeded my wildest imaginings would actually be an understatement (a crime I've rarely been accused of). Here at the end of 31 days of the challenge, the answer to my question is a resounding "all of the above". In writing, images, poems, and haiku, you left no intellectual stone unturned.

 

More than that, however, was how so many of us took a technical idea and suggested ways we could use the same concepts to improve ourselves; or conversely, how we could take the non-technical meaning of a word and apply THAT to our technical lives. And through it all was a constant message of "we can do better. we can be better. we have so much more to learn. we have so much more to do."

 

And even more fundamentally, the message I read time and time again was "we can get there together. as a community. we can help each other be better."

 

For me, it brought to mind a quote by Michael Walzer:

"We still believe, or many of us do, what the Exodus first taught...

- first, that wherever you live, it is probably Egypt;

- second, that there is a better place, a world more attractive, a promised land;

- and third, that 'the way to the land is through the wilderness'.

There is no way to get from here to there except by joining together and marching."

 

 

I would like to thank everyone who took time out of their hectic end-of-year schedules - sometimes personal time over evenings and weekends - to comment so thoughtfully. In that same vein, I'm deeply grateful to the 22 writers who generated the 31 "lead" articles - 12 of whom this year came from the ranks of our incredible, inimitable, indefatigable THWACK MVPs. If you missed any of the days, I'm listing each post below to give you yet another chance to catch up.

 

Finally, I want to give a shout-out to the dedicated THWACK community team for helping manage all the behind-the-scenes work that allowed the challenge to go off without a hitch this year.

 

I am humbled to have had a chance to be part of this, and I'm already thinking about the words, ideas, and stories I hope we can share in the coming year.

*************

Leon Adato
Eric CourtesyIT
Peter Monaghan, CBCP, SCP, ITIL ver.3
Joshua Biggley
Craig Norborg
Ben Garves
Kamil Nepsinsky
Richard Letts
Kevin Sparenberg
Jeremy Mayfield
Patrick Hubbard
Rob Mandeville
Karla Palma
Ann Guidry
Matt R
Jenne Barbour
Thomas Iannelli
Allie Eby
Richard Schroeder
Jenne Barbour
Abigail Norman
Mark Roberts
Zack Mutchler
Rainy Schermerhorn
Shelly Crossland
Jez Marsh
Michael Probus
Jenne Barbour
Jenne Barbour
Erik Eff
Leon Adato
