
Geek Speak


Many of us operate, or have operated, in a stovepiped or siloed IT environment. For some this may simply be a fact of professional life, but regardless of how the organization is structured, a wide and full understanding of the environment lends itself to a smoother, more efficient overall system. As the separation of duties continues to blur in the IT world, it is becoming increasingly important for systems and network professionals to shift how we view both the individual components and the overall ecosystem. As these tidal shifts occur, Linux appearing in the switching and routing infrastructure, servers consuming BGP feeds and making intelligent routing choices, orchestration workflows automating the network and the services it provides - all of these are slowly creeping into more enterprises, more data centers, and more service providers. What does this mean for the average IT engineer? It means that we, as professionals, need to view our workflows and IT environments as a holistic system rather than a set of distinct silos or disciplines.

 

This mentality is especially important in the monitoring aspects of any IT organization, and it is a good habit to build even before these shifts occur. Understanding the large-scale behavior of IT in your environment allows engineers and practitioners to accomplish significantly more with less - and that is a win for everyone. Understanding how your servers interact with the DNS infrastructure, the switching fabric, the back-end storage, and the management mechanisms (i.e., hand-crafted curation of configurations or automation) naturally lends itself to a faster mean time to repair, because you understand the IT organization as a whole rather than just a piece or service within it.

 

One might think, “I don’t need to worry about Linux on my switches and routing on my servers,” and that may be true. However, expanding the knowledge domain from a small box to a large container filled with boxes allows a person to understand not just the attributes of their own box, but the characteristics of all the boxes together. For example, knowing that a new application makes a DNS query for every single packet it sees, where past applications cached locally, can dramatically decrease the downtime that occurs when the underlying systems hosting DNS become overloaded and slow to respond. The same can be said for moving to cloud services: a clear baseline of link traffic - both internal and external - will make it obvious that the new cloud application requires more bandwidth and perhaps less storage.
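To make the DNS example concrete, here is a minimal sketch of why local caching matters. Everything in it is hypothetical (the hostname, the TTL, the stub resolver); it only illustrates that an application with a cache sends one upstream query where an uncached application would send thousands.

```python
import time

class DnsCache:
    """A minimal TTL cache: repeated lookups for the same name
    are served locally instead of hitting the DNS infrastructure."""
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.entries = {}        # name -> (address, expiry time)
        self.resolver_calls = 0  # how many queries reached the servers

    def resolve(self, name, upstream):
        now = time.time()
        cached = self.entries.get(name)
        if cached and cached[1] > now:
            return cached[0]     # cache hit; no load on DNS infra
        self.resolver_calls += 1 # only cache misses go upstream
        address = upstream(name)
        self.entries[name] = (address, now + self.ttl)
        return address

# A stub standing in for the real DNS infrastructure.
def fake_upstream(name):
    return "192.0.2.10"

cache = DnsCache(ttl_seconds=60)
for _ in range(1000):            # 1,000 "packets" needing the same name
    cache.resolve("app.example.com", fake_upstream)

print(cache.resolver_calls)      # 1 upstream query instead of 1,000
```

An application that skips the cache and calls `upstream` directly would generate all 1,000 queries - exactly the load pattern that overwhelms a shared DNS tier.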

 

Fear not! This is not a cry to become a developer or a sysadmin. It's not a declaration that there is a hole in the boat, or a dramatic statement that "IT as we know it is over!" Instead, it is a suggestion to look at your IT environment in a new light. See it as a functioning system rather than a set of disjointed bits of hardware with different uses and diverse managing entities (i.e., silos). The network is the circulatory system, the servers and services are the intelligence, the storage is the memory, and the security is the skin and immune system. Can they stand alone on technical merit? Not really. When they work in concert, is the world a happier place to be? Absolutely. Understand the interactions. Embrace the collaborations. Over time, the overall reliability will be far, far higher.

 

Now, while some of these correlations may seem self-evident, piecing them together, and more importantly tracking them for trends and patterns, has the potential to dramatically increase the number of well-informed, fact-based decisions - and that makes for a better IT environment.

In the first post of this blog series, we’ll cover the fundamentals of cybersecurity, and understanding basic terminology so you can feel comfortable “talking the talk.” Over the next few weeks, we’ll build on this introductory knowledge, and review more complex terms and methodologies that will help you build confidence in today’s ever-evolving threat landscape.

 

To start, here are some foundational terms and their definitions in the world of cybersecurity.

 

Risk: Tied to any potential financial loss, disruption, or damage to the reputation of an organization from some sort of failure of its information technology systems.

 

Threat: Any malicious act that attempts to gain access to a computer network without authorization or permission from the owners.

 

Vulnerability: A flaw in a system that can leave it open to attack. This refers to any type of weakness in a computer system, or an entity’s processes and procedures that leaves information security exposed to a threat.

 

Exploit: As a noun, it’s an attack on a computer system that takes advantage of a particular vulnerability that has left the system open to intruders. Used as a verb, exploit refers to the act of successfully perpetrating such an attack.

 

Threat Actor: Also known as a malicious actor, it’s an entity that is partially or wholly responsible for an incident that affects, or has the potential to affect, an organization's security. Examples of potential threat actors include: cybercriminals, state-sponsored actors, hacktivists, systems administrators, end-users, executives, and partners. Note that while some of these groups are obviously driven by malicious objectives, others may become threat actors through inadvertent compromise.

 

Threat Actions: What threat actors do or use to cause or contribute to a security incident. Every incident has at least one, but most will be comprised of multiple actions. Vocabulary for Event Recording and Incident Sharing (VERIS) uses seven threat action categories: Malware, Hacking, Social, Misuse, Physical, Error, and Environmental.
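The seven VERIS action categories lend themselves naturally to a small data model. The sketch below is not the VERIS schema itself - the incident description and action list are invented for illustration - but it shows how the "every incident has at least one action, often several" rule might be encoded.

```python
from enum import Enum
from dataclasses import dataclass

class ThreatAction(Enum):
    """The seven VERIS threat action categories."""
    MALWARE = "Malware"
    HACKING = "Hacking"
    SOCIAL = "Social"
    MISUSE = "Misuse"
    PHYSICAL = "Physical"
    ERROR = "Error"
    ENVIRONMENTAL = "Environmental"

@dataclass
class Incident:
    description: str
    actions: list  # at least one action; most incidents have several

# A hypothetical incident tagged with multiple actions:
phishing_breach = Incident(
    description="Credential phishing followed by database theft",
    actions=[ThreatAction.SOCIAL, ThreatAction.HACKING, ThreatAction.MALWARE],
)

assert len(phishing_breach.actions) >= 1
print(len(ThreatAction))  # 7 categories
```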

 

Threat Vector: A path or tool that a threat actor uses to attack the target.

 

 

Now let’s look at how these basic terms become part of a more complex cybersecurity model. You’ve probably heard about the Cyber Kill Chain. This model outlines the various stages of a potentially successful attack. The best-known version is the Lockheed Martin Kill Chain, which breaks an attack into seven phases.

 

Reconnaissance – Research, identification, and selection of targets, often represented as crawling internet websites, like social networks, organizational conferences, and mailing lists for email addresses, social relationships, or information on specific technologies.

 

Weaponization – Coupling a remote access Trojan with an exploit into a deliverable payload. Most commonly, application data files, such as PDFs or Microsoft Office documents, serve as the weaponized deliverable.

 

Delivery – Transmission of the weapon to the targeted environment via, for example, email attachments, websites, and USB removable media.

 

Exploitation – After the payload is delivered to the victim host, exploitation triggers the intruders’ code. Exploitation targets an application or operating system vulnerability, or leverages an operating system feature that auto-executes code.

 

Installation – Installation of a remote access Trojan or backdoor on the victim system allows the adversary to maintain persistence inside the environment.

 

Command and Control – Advanced Persistent Threat (APT) malware typically establishes remote command and control channels so that intruders have “hands on the keyboard” access inside the target environment.

 

Actions on Targets – Typically the prime objective is data exfiltration, involving collecting, encrypting, and extracting information from the victim environment. Intruders may only seek access to a victim box for use as a jump point to compromise additional systems, and move laterally inside the network or attack other partner organizations.

 

The goal of any attack detection methodology is to identify a threat in as early a stage of the kill chain as possible. In subsequent blogs—as we build upon these foundational definitions and cover things such as attack surfaces and protection mechanisms—we will refer back to the phases of the kill chain when discussing certain threats, like malware and the role of protections such as IPS.
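The idea of "earliest phase wins" can be sketched as a small lookup. The alert names and the alert-to-phase mapping below are invented for illustration; the point is simply that, given several detections, a defender cares about the earliest kill-chain phase any of them implies.

```python
# The seven kill-chain phases in order; earlier detection means less damage.
KILL_CHAIN = [
    "Reconnaissance", "Weaponization", "Delivery",
    "Exploitation", "Installation", "Command and Control",
    "Actions on Targets",
]

# Hypothetical mapping of detection types to the phase they indicate.
ALERT_PHASE = {
    "port_scan": "Reconnaissance",
    "malicious_attachment": "Delivery",
    "exploit_triggered": "Exploitation",
    "beacon_traffic": "Command and Control",
}

def earliest_phase(alerts):
    """Return the earliest kill-chain phase implied by a set of alerts."""
    phases = [ALERT_PHASE[a] for a in alerts if a in ALERT_PHASE]
    if not phases:
        return None
    return min(phases, key=KILL_CHAIN.index)

print(earliest_phase(["beacon_traffic", "malicious_attachment"]))  # Delivery
```

Here the beacon traffic alone would suggest the intruder is already at Command and Control, but the attachment alert tells us the intrusion was detectable back at Delivery - the stage where blocking it is cheapest.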

 

Note that as threat vectors have evolved and changed, the kill chain - although a good starting point - no longer covers all possibilities. If nothing else, this ensures that the job of a cybersecurity professional will never be static.

 

Useful References:

http://veriscommunity.net

https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html

The World Cup is over, but can France really be happy to win a tournament for which the USA didn't qualify?

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Evaluating the Evaluation: A Benchmarking Checklist

Wonderful thoughts here on a checklist for benchmarks and scalability. Wish I could go back in time and hand this to some developers who struggled with getting their code to scale.

 

Forget Legroom—Air Travelers Are Fed Up With Seat Width

Agreed, I’d prefer more width and elbow room. However, it’s legroom that really defines for me if I can work on my laptop. Paying for a business class seat should mean that you have room to work.

 

Unfollowing Everybody

I like this strategy, and I’ve done this a few times over the years. However, I’ve not thought about scripting it out, or using Excel to help drive my decisions about who to keep following.

 

Netflix Stuns HBO: Emmy Nominations by the Numbers

Reading this article made me realize that HBO and Netflix are clones of each other. Both companies were founded to provide media to our homes, one through cable and the other using the Post Office. Then they both executed a pivot to be more than distributors: they started creating the content they distribute. And now Netflix takes the lead, mostly due to their data-driven culture.

 

Are we truly alone in the cosmos? New study casts doubt on rise of alien life in our galaxy

The Fermi Paradox doesn’t get talked about enough. Probably because it can be a bit depressing to realize we are alone.

 

Apple’s most expensive MacBook Pro now costs $6,700

If you were wondering what to get a Geek like me for Christmas.

 

Burglar stuck in Vancouver escape room panics, calls 911

Seriously though, those rooms can be tough, and you are usually allowed a hint or two. Maybe that’s why he called.

 

Humbled and honored to be selected as a Microsoft MVP for the tenth consecutive year:

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

Here is an interesting article from my colleague Joe Kim, in which he discusses how technology drives military asset management.

 

Military personnel need to be able to easily manage the lifecycle of their connected assets, from creation to maintenance to retirement. They can do this by creating a digital representation of a physical object, like a troop transport, which they can use for a number of purposes, including monitoring the asset’s health status, movements, location, and more.

 

The concept behind these “digital twins” was first presented in 2002 during a University of Michigan presentation by Dr. Michael Grieves, who posited that there are two systems: one physical, the other a digital representation that contained all of the information about the physical system. His thought was that the digital twin could be used to monitor and support the entire life cycle of its physical sibling and, in the process, keep that sibling functioning and healthy.

 

Digitizing a vehicle

 

Consider a military vehicle that has just rolled off the assembly line and is ready to be commissioned.

Getting the most out of this asset requires consistent maintenance. Ideally, that maintenance can be performed proactively to prevent any potential breakdowns. It can be difficult to know or keep track of when the vehicle may need maintenance, and impossible to predict when a breakdown may occur.

 

Fortunately, the data collected by the various sensors contained within the vehicle can be used to create a digital twin. This representation can provide a very clear picture, in real time, of its status.

Further, by collecting this information over time, the digital twin has the ability to create an evolving yet extraordinarily accurate picture of how the vehicle will perform in the future. As the sensors continue to report information, the digital twin continues to learn, model, and adapt its prediction of future performance.
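The "learn, model, and adapt" loop described above can be illustrated with something as simple as an exponentially weighted moving average over sensor readings. The sensor values, smoothing factor, and maintenance threshold below are all hypothetical; a real digital twin would use far richer models, but the shape of the idea is the same: fold each new reading into a running estimate and act when the estimate crosses a limit.

```python
def ewma_forecast(readings, alpha=0.3):
    """Exponentially weighted moving average: a minimal stand-in for
    the learn/model/adapt loop a real digital twin would run."""
    estimate = readings[0]
    for r in readings[1:]:
        estimate = alpha * r + (1 - alpha) * estimate
    return estimate

# Hypothetical engine-temperature readings streamed from the vehicle.
temps = [82, 83, 85, 88, 93, 97]
predicted = ewma_forecast(temps)

MAINTENANCE_THRESHOLD = 90  # hypothetical limit from the maintenance manual
if predicted > MAINTENANCE_THRESHOLD:
    print("schedule proactive maintenance")
```

The rising trend pushes the estimate above the threshold before an outright failure, which is exactly the proactive-maintenance window the twin is meant to buy.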

 

This information can help teams in a number of ways. The analytics derived from historical performance data can be used to point to potential warning signs and predict failures before they occur, thereby helping avoid unwanted downtime. Data can also be used to diagnose a problem and even, in some cases, solve the issue remotely. At the least, digital twins can be used to help guide soldiers and repair specialists to quickly fix the problem on the ground.

 

The life cycle management process also becomes much more efficient. Digital twins can help simplify and accelerate management of a particular thing, in this case, a physical entity like a vehicle.

 

Taking the next step

 

The digital twin concept is a logical next step to consider for defense agencies that have already begun investing in software-defined services. These services are designed to simplify and accelerate the management of core technology concepts, including computing, storage, and networking. The idea is to improve the management of each of these concepts throughout their life cycles, from planning and design through production, deployment, maintenance, and, finally, retirement.

 

Digital twins take this concept a step further by applying it to physical objects. It’s an evolution for the military’s ever-growing web of connectivity. Digital twins, and the data analysis they depend on, can open the doors to more efficient and effective asset lifecycle management.

 

Find the full article on SIGNAL.

A recent conversation on Twitter struck a nerve with me. The person posited that,

 

"If you're a sysadmin, you're in customer service. You may not realise it, but you are there TO SERVE THE CUSTOMER. Sure that customer might be internal to your organisation/company, but it's still a customer!"

 

A few replies down the chain, another person posited that,

 

"Everyone you interact with is a customer."

 

I would like to respectfully (and pedantically) disagree.

 

First, let's clear something up: The idea of providing a "service," which could be everything from a solution to an ongoing action to consultative insight, and providing it with appropriate speed, professionalism, and reliability, is what we in IT should always strive to do. That doesn't mean (as other discussions on the Twitter thread pointed out) that the requester is always right; that we should drop everything to serve the requester's needs; that we must kowtow to the requester's demands. It simply means that we were hired to provide a certain set of tasks, to leverage our expertise and insight to help enable the business to achieve its goals.

 

And when people say, "you are in customer service," that is usually what they mean. But I wish we'd all stop using the word "customer." Here is why:

 

Saying someone is a customer sets up a collection of expectations in the mind of both the speaker and the listener that don’t reflect the reality of corporate life.

 

As an external service provider—a company hired to do something—I have customers who pay me directly to provide services. But I can prioritize which customers get my attention and which don’t. I can “fire” abusive customers by refusing to serve them; or I can prohibitively price my services for “needy” customers so that either they find someone else or I am compensated for the aggravation they bring me. I can choose to specialize in certain areas of technology, and then change that specialization down the road when it’s either not lucrative or no longer interesting to me. I can follow the market, or stay in my niche. These are all the things I can do as an external provider who has ACTUAL customers.

 

Inside a company, I can do almost none of those things. I might be able to prioritize my work somewhat, but at the end of the day I MUST service each and every person who requests my help. I cannot EVER simply choose to not help or provide service to a coworker. I can put them off, but eventually I have to get to their request. Since I’m not charging them anything, I can’t price my services in a way that encourages abusive requestors to go elsewhere. Even in organizations that have a chargeback system for IT services, that charge rate must be equal across the board. I can’t charge more to accounting and less to legal. Or more to Bob and less to Sarah. The services I provide internally are pre-determined by the organization itself. No matter how convinced I am that “the future is cloud,” I’m stuck building, racking, and stacking bare-metal servers in our data center until the company decides to change direction.

 

Meanwhile, for the person receiving those services, as a customer, there’s quite a range of options. Foremost among these is that I can fire a provider. I can put out an RFP and pick the provider who offers me the best services for my needs. I can haggle on price. I can set an SLA with monetary penalties for non-compliance. I can select a new technical direction, and if my current provider is not experienced, I can bring in a different one.

 

But as an internal staff requesting service from the IT department, I have almost none of those options. I can’t “fire” my IT department. Sure, I might go around the system and bring in a contractor to build a parallel, “shadow IT” structure. But at the end of the day, I’m going to need to have an official IT person get me into Active Directory, route my data, set up my database, and so on. There’s only so much a shadow IT operation can do before it gets noticed (and shut down). I can’t go down the street and ask the other IT department to give me a second bid for the same services. I can’t charge a penalty when my IT department doesn’t deliver the service they said they would. And if I (the business “decider”) choose to go a new technical route, I must wait for the IT department to catch up or bring in consultants NOT to replace my IT department, but to cover the gap until they get up to speed.

 

Whether we mean to or not, whether we like it or not, and whether you agree with me or not, I have found that using the word "customer" conjures at least some of those expectations.

 

But there’s one other giant issue when you use the word “customer,” and that’s the fact that people often confuse “customer” with “consumer.” That’s not an IT issue, that’s a life issue. The thing to keep in mind is that the customer is the person who pays for the service. The consumer is the person who receives (enjoys) the service. And the two are not always the same. I’m not just talking about taking my kids out to ice cream.

 

A great example is the NFL. According to Wikipedia, the NFL television blackout policies were, until they were largely overridden in 2014, the strictest among North American sports leagues. In brief, the blackout rules state that “…a home game cannot be televised in the team's local market if all tickets are not sold out 72 hours prior to its start time.” Prior to 1973, this blackout rule applied to all TV stations within a 75-mile radius of the game.

 

How is this possible? Are we, the fans, not the customers of football? Even if I’m not going to THIS game, I certainly would want to watch each game so that the ones I DO attend are part of a series of experiences, right?

 

The answer is that I’m not the customer. I’m the consumer. The customer is “the stadium” (the owners, the vendors, the advertisers). They are the ones putting up the money for the event, and they want to make their money back by ensuring sold-out crowds. The people who watch the game—whether in the stands or over the airwaves—are merely consumers.

 

In IT terms, the end-user is NOT the customer. They are the consumer. Management is the customer—the one footing the bill. If management says the entire company is moving to virtual desktops, it doesn’t matter whether the consumer wants, needs, or likes that decision.

 

So again, calling the folks who receive IT services a “customer” sets up a completely false set of expectations in the minds of everyone involved about how this relationship is going to play out.

 

However, there is another word that exists, within easy reach, that is far more accurate in describing the relationship, and also has the ability to create the behaviors we want when we (ill-advisedly) try to shoehorn “customer” into that spot. And that word is: “colleague.”

 

A colleague is someone I collaborate with. Maybe not on a day-to-day basis or in terms of my actual activities, but we work together to achieve the same goal (in the largest sense, whatever the goals of the business are). A colleague is someone I can’t “fire,” replace, or solicit a competing bid against.

 

“Colleague” also creates the (very real) understanding that this relationship is long-term. Jane in the mailroom may become Jane in accounting, and later Jane the CFO. Through it all she remains my colleague. The relationship I build with her endures and my behavior toward her matters.

 

So, I’m going to remain stubbornly against using the word “customer” to refer to my colleagues. It de-values them and it de-values the relationship I want to have with them, and the one I hope they have with me.


Building a culture that favors protecting data can be challenging. In fact, most of us who love our data spend a huge amount of time standing up for our data when it seems everyone else wants to take the easiest route to getting stuff done. I can hear the pleas from here:

 

  • We don't have time to deal with SQL injection now. We will get to that later.
  • If we add encryption to this data, our queries will run longer. It will make the database larger, which will also affect performance. We can do that later if we get the performance issues fixed.
  • I don't want to keep typing out these long, complex passwords. They are painful.
  • Multi-factor authentication means I have to keep my phone near me. Plus, it's a pain.
  • Security is the job of the security team. They are a painful bunch of people.

 

…and so on. What my team members don't seem to understand is that these pain points are supposed to be painful. The locks on my house doors are painful. The keys to my car are painful. The PIN on my credit card is painful. All of these are set up, intentionally, as obstacles to access -- not my access, but unauthorized access. Why is it that team members who lock their doors, shred sensitive documents, and keep their collector action figures under glass don't want to protect the data we steward on behalf of customers? In my experience, these people don't want to protect data because they are measured, compensated, and punished in ways that take away almost all the incentives to do so. Developers and programmers are measured on speed of delivery. DBAs are measured on uptime and performance. SysAdmins are measured on provisioning resources. Rarely have these roles been measured and rewarded for security and privacy compliance.
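Taking the first complaint on the list, the "deal with SQL injection later" fix is usually a one-line change: use parameterized queries. A minimal sketch (the table, rows, and attacker input are invented; SQLite stands in for whatever database you run):

```python
import sqlite3

# A throwaway in-memory database with hypothetical sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# Attacker-controlled input that would break a string-concatenated query.
user_input = "' OR '1'='1"

# Parameterized query: the driver treats the input as data, never as SQL.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection payload matches nothing
```

Had the query been built with string concatenation (`"... WHERE name = '" + user_input + "'"`), the payload would have returned every row. The parameterized version costs nothing at delivery time, which is rather the point.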

 

To Reward, We Must Measure

 

How do we fix this? We start rewarding people for data protection activities. To reward people, we need to measure their deliverables, such as:

 

  • An enterprise-wide security policy and framework that includes specific measures at the data category level
  • Encryption design, starting with the data models
  • Data categorization and modeling
  • Test design that includes security and privacy testing
  • Proactive recognition of security requirements and techniques
  • Data profiling testing that discovers unprotected or under-protected data
  • Data security monitoring and alerting
  • Issue management and reporting

 

As for the rewards, they need to focus on the early introduction of data protection features and services. This includes reviewing designs and user stories for security requirements.
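One of the measurable deliverables listed above, data profiling that discovers unprotected data, is easy to prototype. The patterns and sample values below are hypothetical and deliberately crude (real profiling tools use far more robust detectors), but they show the shape of the check: scan a sample of column values and report which sensitive data types appear in plaintext.

```python
import re

# Hypothetical patterns flagging data that should never sit in plaintext.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
}

def profile_column(values):
    """Return which sensitive patterns appear in a sample of column values."""
    found = set()
    for value in values:
        for label, pattern in PATTERNS.items():
            if pattern.search(value):
                found.add(label)
    return found

# A hypothetical sample pulled from a column that should hold order IDs.
sample = ["order-1001", "jane@example.com", "4111 1111 1111 1111"]
print(sorted(profile_column(sample)))  # ['card_number', 'email']
```

A report like this, run regularly, turns "we think the data is protected" into a measurable deliverable someone can be rewarded for.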

 

Then we get to the hard part: I'm of the opinion that specific rewards for doing what is already expected of me are over the top. But I recognize that this isn't always the best way to motivate positive actions. Besides, as I will get into later in this series, the organizational punishments for not protecting data may grow so large that a company simply cannot afford the lack of a data protection culture many of us have today. Plus, we don't want to have to use prison time as the measurement that encourages data protection.

 

In this series, I'll be discussing data protection actions, why they are important, and how we can be better at data. Until then, I'd love to hear about what, if any, data protection reward (or punishment) systems your organization has in place today.

I hope everyone had a wonderful holiday six-day weekend. The second half of the year has begun. There is still time to accomplish the goals you set at the start of the year.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

London police chief ‘completely comfortable’ using facial recognition with 98 percent error rate

It would seem that a reasonable person would understand that this technology isn’t ready, and that having a high number of mistakes leads to a lot of extra work by the police.

 

Why Won’t Millennials Join Country Clubs?

Because they are too busy paying down ridiculous student debt and mortgages?

 

Spiders Can Fly Hundreds of Miles Using Electricity

And they can crawl inside your ear when you sleep. Anyway, sweet dreams kids!

 

Manual Work is a Bug

A bit long but worth the time. Always be automating.

 

MoviePass is running out of money and needs to raise $1.2 billion

For $10 a month you can watch $300 worth of movies, which explains why MoviePass is bleeding cash right now. But hey, don’t let a good business model get in the way of that VC money.

 

If You Say Something Is “Likely,” How Likely Do People Think It Is?

I am certain that probably 60% of Actuator readers are likely to enjoy this article half the time.

 

US nickels cost seven cents to make. Scientists may have a solution

Sadly, the answer isn’t “get rid of nickels.” I’m fascinated by the downstream implications of this, and by the idea that our government should care that vending machines were built on the assumption that coins would never change. Get rid of all coins, introduce machines that use cards and phones, and move into the 21st century, please.

 

How I spent my holiday weekend: building a fire pit, retaining wall, and spreading 3 cubic yards of pea stone. Who wants some scotch and s'mores?

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

For the public sector to maintain a suitable level of cybersecurity, the U.K. government has implemented some initiatives to guide organizations on how to do so effectively. In June 2017, the National Cyber Security Centre (NCSC) rolled out four measures as part of the Active Cyber Defence (ACD) program to assist government departments and arms-length public bodies in increasing their fundamental cybersecurity.

 

These four measures are intended to make it more difficult for criminals to carry out attacks. They include blocking malicious web addresses from being accessed from government systems, blocking fake emails pretending to be from the government, and helping public bodies fix security vulnerabilities on their websites. The fourth measure involves taking down phishing sites that the NCSC spots pretending to be a public-sector department or business.

 

Government IT professionals must incorporate strategies and solutions that make it easier for them to meet their compliance expectations. We suggest an approach on three fronts.

 

Step 1: Ensure network configurations are automated

 

One of the things departments should do to comply with the government’s security expectations is to monitor and manage their network configuration statuses. Automating network configuration management processes can make it much easier to help ensure compliance with key cybersecurity initiatives. Device configurations should be backed up and restored automatically, and alerts should be set up to advise administrators whenever an unauthorized change occurs.
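The "alert on unauthorized change" workflow above can be sketched with nothing more than a diff against the last approved backup. The device name, the configs, and the backup store below are all hypothetical; a real deployment would pull running configs over SSH/SNMP and store backups centrally, but the compliance check itself reduces to this comparison.

```python
import difflib

def check_config(device, running_config, backups):
    """Compare a device's running config against its last approved backup;
    return a human-readable diff if an unauthorized change is detected."""
    baseline = backups.get(device, "")
    if running_config == baseline:
        return None  # compliant; nothing to report
    diff = difflib.unified_diff(
        baseline.splitlines(), running_config.splitlines(),
        fromfile=f"{device}/approved", tofile=f"{device}/running",
        lineterm="",
    )
    return "\n".join(diff)

# Hypothetical approved backup, e.g. captured by the nightly backup job.
backups = {"edge-rtr-01": "hostname edge-rtr-01\nsnmp-server community public"}
running = "hostname edge-rtr-01\nsnmp-server community letmein"

alert = check_config("edge-rtr-01", running, backups)
if alert:
    print("unauthorized change detected:\n" + alert)
```

Automating this loop - back up, diff, alert - is what turns configuration management from a periodic audit into continuous compliance.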

 

Step 2: Make reporting a priority

 

Maintaining strong security involves prioritizing tracking and reporting. These reports should include details on configuration changes, policy compliance, security, and more. They should be easily readable, shareable, and exportable, and include all relevant details to show that they remain up-to-date with government standards.

 

Step 3: Automate patches and stamp out suspicious activity

 

IT administrators should also incorporate log and event management tools to strengthen their security postures. Like a watchdog, these solutions are designed to be on alert for suspicious activity, and can alert administrators or take actions when a potentially malicious threat is detected. This complements existing government safeguards like protected Domain Name System (DNS) and DMARC anti-spoofing.
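At its core, the watchdog behavior described above is pattern matching over an event stream. The signatures, log lines, and domain below are invented for illustration (real log and event management tools use correlation rules, not two regexes), but the sketch shows the basic shape: scan incoming events and surface the suspicious ones.

```python
import re

# Hypothetical signatures a log/event tool might alert on.
SUSPICIOUS = [
    re.compile(r"failed password", re.IGNORECASE),
    re.compile(r"dns query .*\.badexample\.net", re.IGNORECASE),
]

def scan(log_lines):
    """Return the lines that match any suspicious signature."""
    return [line for line in log_lines
            if any(p.search(line) for p in SUSPICIOUS)]

# Hypothetical syslog excerpt.
logs = [
    "Jul 04 10:01:02 sshd: Failed password for root from 203.0.113.9",
    "Jul 04 10:01:05 named: dns query for cdn.badexample.net",
    "Jul 04 10:01:07 sshd: Accepted publickey for ops",
]
hits = scan(logs)
print(len(hits))  # 2 suspicious events out of 3 lines
```

In practice the "take action" step - locking an account, quarantining a host - is where these tools earn their keep; the detection side is this filter, run continuously and at scale.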

 

Implementing automated patch management is another effective way to help make sure that network technologies remain available, secure, and up-to-date. Government departments must stay on top of their patch management to combat threats and help maintain strong security. The best way to do this is to manage patches from a centralized dashboard.

 

Keeping up with the guidelines proposed in initiatives such as the ACD program can be a tricky and complicated process, but it doesn’t have to be that way. By integrating these simple but effective steps, government IT professionals are better positioned to efficiently follow the guidelines and up their security game, protecting not just themselves, but the government’s reputation.

 

Find the full article on Central Government.

Happy 4th of July! Holiday or not, the Actuator always delivers. I do hope you are taking the time to spend with family and friends today. You can come back and read this post later, I won’t mind.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Debugging Serverless Apps: from monitoring invocations to observing a system of functions

As our systems become more complex, it becomes more important than ever to start labeling everything we can. Metadata will become your most important data asset.

 

4 Types of Idle Cloud Resources That Are Wasting Your Money

Containers are likely a vampire resource in your cloud environment, along with a handful of other allocated resources that are lightly used.

 

Dealing with the insider threat on your network

Buried in this article is this gem: “…security is not so much about monitoring the perimeter anymore; companies need to be looking on the inside - how communications are happening on the network, how systems are talking to each other and most importantly what are the users doing on the network.” This is why anomaly detection, built on top of machine learning algorithms, is the next generation of tools to defend against threats.

 

LA Fitness, ‘Hotel California’ and the fallacy of digital transformation

The author uses LA Fitness as one example, but I know of dozens more. This scenario is very common, where a company chooses to modernize only parts of their business. Usually, the part chosen is one that generates revenue, and not with customer service.

 

Apple is rebuilding Maps from the ground up

Two interesting parts to this story. The first is the admission that Apple knew their Maps feature was going to be poor right from the start, but they knew they needed to launch something. Second, the way they are making an effort to collect data and respect user privacy at the same time.

 

Here's how Amazon is able to poach so many execs from Microsoft

The answer combines a dollar sign in front and lots of numbers after.

 

About 300K expected to visit Las Vegas for July 4th

With July 4th on a Wednesday, more and more people are thinking "WOOHOO, SIX DAY WEEKEND!"

 

Happy Independence Day! Here's a picture of me riding an eagle:

 

Back from Germany and my first ever SQL Grillen, the event that combines two of my favorite things: SQL Server and meat. I even had the opportunity to drive a BMW on the Autobahn. I hope to make it back for next year, as the event expands to two days of pork-belly goodness.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Amazon Prime benefits roll out to all Whole Foods stores

This sounds great, except the nearest Whole Foods to me is over 30 miles away. The cost of gasoline would offset the meager discount pricing being offered, not to mention the money lost due to the time spent getting there and back. Maybe if Amazon offered me a discount on grocery delivery this might make sense. But for me and others who don't live near a Whole Foods, this isn't much of a benefit to being a Prime member.

 

Tesla sues ex-employee for hacking and theft. But he says he's a whistleblower

This seems like it is going to get ugly, and fast. Musk has been having trouble meeting his numbers with Tesla. It's easy for him to make an effort to blame his problems on sabotage. It's also easy for an employee to step forward and present data in a way that shows Musk has been the reason why Tesla doesn't meet their numbers. I'm expecting this case to drag on for a while, and in the end, everyone loses, except Musk because he will still have billions of dollars.

 

Supreme Court Rules on Mobile Location Data: Get a Warrant

This seems like common sense, and something long overdue. I'm all for enforcing laws, but I'm also a big advocate for privacy. The police still have the ability to access the data, but must provide proof that access to the data is needed.

 

240 minutes a day separates the rich from everyone else

I'm not including this because I think y'all should be millionaires. No, I'm including this because I like how it breaks down those 240 minutes into specific activities. I challenge you to pick one (education, exercise, goal-setting, relationship-building) and incorporate it into your daily routine as well.

 

The Machine Fired Me

The dark side to automation: the most efficient process is the termination process.

 

Apple to Unveil High-End AirPods, Over-Ear Headphones For 2019

Because the current price of $159 wasn't enough money, apparently.

 

Facebook will tell you how much time you’re wasting in its app

NARRATOR VOICE: All time on Facebook is wasted.

 

Just a quick picture to show you the glory of SQL Grillen last week:

 

So you’ve made it through a disaster recovery event. Well done! Whether it was a simulation or an actual recovery, it was possibly a tense and trying time for you and your operations team. The business is no doubt happy to know that everything is back up and running from an infrastructure perspective, and they’re likely scrambling to test their applications to make sure everything’s come up intact.

 

How do you know that your infrastructure is back up and running though? There are probably a lot more green lights flashing in your DC than there are amber or red ones. That’s a good sign. And you probably have visibility into monitoring systems for the various infrastructure elements that go together to keep your business running. You might even be feeling pretty good about getting everything back without any major problems.

 

But would you know if anything did go wrong? Some of your applications might not be working. Or worse, they could be working, but with corrupt or stale data. Your business users are hopefully going to know if something’s up, but that’s going to take time. It could be something as simple as a server that’s come up with the wrong disk attached, or a snapshot that’s no longer accessible.

 

Syslog provides a snapshot of what happened during the recovery, and it is equally useful as a validation tool while you’re still in the midst of one. When all of those machines come back on after a DC power failure, for example, a flood of messages will be sent to your (hopefully centralized) syslog targets. You can go back through the logs to ensure that hosts have come back in the correct order. More importantly, if an application is still broken after the first pass at recovery, syslog can help you pinpoint where the problem may be. Rather than manually checking every piece of infrastructure and code that comprises the application stack, you can narrow your focus and, hopefully, resolve the problem faster than if you were simply told “there’s something wrong with our key customer database.”
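As a sketch of the kind of check this enables, the following Python snippet parses kernel boot messages out of a centralized syslog excerpt and flags hosts that came back before something they depend on. The hostnames, log lines, and expected order are all hypothetical; in practice you would read from your actual syslog archive.

```python
import re
from datetime import datetime

# Hypothetical excerpt from a centralized syslog target after a DC power
# event; hostnames and message text are illustrative only.
SYSLOG_LINES = [
    "Jun 14 06:02:11 storage01 kernel: Linux version 4.18.0 booting",
    "Jun 14 06:05:47 dns01 kernel: Linux version 4.18.0 booting",
    "Jun 14 06:04:02 db01 kernel: Linux version 4.18.0 booting",
]

# The order hosts *should* come back in: storage and DNS before the database.
EXPECTED_ORDER = ["storage01", "dns01", "db01"]

def boot_times(lines, year=2018):
    """Map each host to the timestamp of its first kernel boot message."""
    times = {}
    pattern = re.compile(r"^(\w{3} +\d+ [\d:]+) (\S+) kernel: .*booting")
    for line in lines:
        m = pattern.match(line)
        if m:
            ts = datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S")
            times.setdefault(m.group(2), ts)
    return times

def out_of_order(lines, expected):
    """Return (prerequisite, dependent) pairs where the dependent booted first."""
    times = boot_times(lines)
    problems = []
    for earlier, later in zip(expected, expected[1:]):
        if earlier in times and later in times and times[earlier] > times[later]:
            problems.append((earlier, later))
    return problems

print(out_of_order(SYSLOG_LINES, EXPECTED_ORDER))
```

Here the database host came up before DNS, so the check surfaces that pair immediately, instead of you discovering it an hour later via a broken application.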

 

Beyond troubleshooting, though, I think syslog is a great tool to use when you need to provide some kind of proof back to the business that their application is either functional or having problems because of an issue outside of the infrastructure layer. You’ve likely heard someone say that “it’s always the network,” when it likely has nothing to do with the network. But proving that to an unhappy end-user or developer who’s got a key application that isn’t working anymore can be tricky. Having logs available to show them will at least give them some comfort that the problem isn’t with the network, or the storage, or whatever.

 

Syslog also gives you a way of validating your recovery process, and providing the business, or the operations manager, evidence that you’ve done the right thing during the recovery process. It’s not about burying the business with thousands of pages of logs, but rather demonstrating that you know what you’re doing, you have a handle on what’s happening at any given time, and you can pinpoint issues quickly. Next time there is an issue, the business is going to have a lot more confidence in your ability to resolve the crisis. This can only be a good thing when it comes to improving the relationship between business users and their IT teams.

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

Today, with the proliferation of the Internet of Things (IoT), thousands of devices are now connected to government networks, many of them without the knowledge of IT teams. Research firm Gartner predicts that more than 20 billion connected “things” will be in use worldwide by 2020—nearly three times the number in use today.

 

In a recent Federal Cybersecurity Survey, federal IT decision makers weighed in on the growing importance of managing the often invisible threats promulgated by IoT. Respondents identified an increased attack surface as the greatest security challenge facing their agencies as IoT continues to evolve. The second-greatest security threat, according to those surveyed, is the inconsistency of security on connected devices. The majority surveyed agreed that some enhancements were needed to better discover, manage, and secure IoT devices.

 

How do federal IT pros put those enhancements in place to more effectively manage IoT devices? Three steps will start the process.

 

Step 1: Understanding

 

The first step to enhancing IoT security is information, or gaining an understanding of what’s out there. In an IoT world, a dramatic number of devices may be connected.

 

The best way to get a handle on connected devices is to use a set of comprehensive network monitoring tools; this will help itemize everything currently connected to the network. Consider using tools that also provide a view into who is connected, when they connected, and where they are connected.
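To make the idea concrete, here is a minimal Python sketch of turning raw address-table output into an itemized inventory of connected devices. The hostnames, IPs, and `arp -a`-style text are made up for illustration; a real inventory would come from your monitoring platform or the live ARP/NDP tables of your switches.

```python
import re

# Illustrative output in the style of `arp -a`; all addresses are fictional.
ARP_OUTPUT = """\
printer-3f.example.gov (10.1.4.23) at 00:1b:a9:5e:12:7c on en0
camera-07.example.gov (10.1.4.88) at 3c:ef:8c:01:aa:02 on en0
? (10.1.4.199) at b8:27:eb:44:55:66 on en0
"""

def inventory(arp_text):
    """Parse ARP-style entries into (hostname, ip, mac) records."""
    pattern = re.compile(r"^(\S+) \((\d+\.\d+\.\d+\.\d+)\) at ([0-9a-f:]{17})")
    return [m.groups() for m in map(pattern.match, arp_text.splitlines()) if m]

for host, ip, mac in inventory(ARP_OUTPUT):
    print(host, ip, mac)
```

Note the third entry: a device with no resolvable hostname. That is exactly the sort of record an inventory pass exists to surface.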

 

Taking that even further, some tools offer an overview of which ports are in use and which are not. This information helps the federal IT pro keep unused ports closed against potential security threats and avoid covertly added devices.

 

Also, consider creating a list of approved devices for the network. This will help the security team more easily and quickly identify when something out of the ordinary happens, as well as surface any existing unknown devices the team may need to disconnect immediately. The best way to profile devices is to implement a security policy that only allows approved vendors or devices.
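An approved-device list reduces the question "is this device supposed to be here?" to a set difference. The sketch below assumes a list keyed by MAC address; the MACs and descriptions are hypothetical, and a real policy might approve vendor OUI prefixes rather than individual addresses.

```python
# Hypothetical approved-device list keyed by MAC address.
APPROVED_MACS = {
    "00:1b:a9:5e:12:7c": "Print services",
    "3c:ef:8c:01:aa:02": "Facilities cameras",
}

def unknown_devices(seen_macs, approved=APPROVED_MACS):
    """Return MACs seen on the network that are not on the approved list."""
    return sorted(set(seen_macs) - set(approved))

# A sweep that turns up one approved device and one stranger:
print(unknown_devices(["00:1b:a9:5e:12:7c", "b8:27:eb:44:55:66"]))
```

Anything the function returns is a candidate for investigation or immediate disconnection.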

 

Step 2: Network monitoring plus

 

Beyond network monitoring, it is equally important to understand what those devices are doing relative to what they’re supposed to be doing. For example, if a network administrator sees that a network printer is not acting like a printer, but is instead acting like a far more complex information-sharing node, that is a dramatic red flag. We’re far beyond the point where device identification alone is enough; we also need to focus on device behavior.

 

Monitoring device activity should include a process to ensure that the only devices hitting the network are those deemed secure. The federal IT pro will want to track and monitor all connected devices by MAC and IP address, as well as access points. Set up user and device watch lists to help detect rogue users and devices, and to maintain control over who and what is using the network.
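The printer-acting-like-a-node example above can be sketched as a simple baseline check: for each device role, keep the set of ports it is expected to talk on, and flag flows that fall outside it. The roles, ports, and flow records here are assumptions for illustration; real flow data would come from your flow or packet monitoring tooling.

```python
# Per-role baseline of destination ports each device type is expected to use
# (illustrative values: raw printing, IPP, SNMP for printers; RTSP/HTTPS for cameras).
ROLE_PORTS = {
    "printer": {9100, 631, 161},
    "camera": {554, 443},
}

# Observed flows as (device, role, destination port) tuples.
FLOWS = [
    ("printer-3f", "printer", 9100),
    ("printer-3f", "printer", 445),   # SMB traffic from a printer is suspicious
    ("camera-07", "camera", 554),
]

def flag_anomalies(flows, baseline=ROLE_PORTS):
    """Flag flows whose destination port is outside the device role's baseline."""
    return [(dev, port) for dev, role, port in flows
            if port not in baseline.get(role, set())]

print(flag_anomalies(FLOWS))
```

The flagged pair is the red flag the paragraph describes: a printer suddenly speaking a file-sharing protocol.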

 

Step 3: Update, update, update

 

As pinpointed in the Federal Cybersecurity Survey, one of the greatest concerns for federal IT pros is the inconsistency, or outright lack, of security on IoT devices. Here’s why: IoT devices are generally simple, cheap, and low-powered. They often have no built-in security, and they certainly cannot run the antivirus programs that traditional computers do.

 

The best way to stay ahead of the IoT explosion? Stay on top of security patches. Be aware of the patch release schedules for the vendors whose devices make up your environment.

 

The IoT is here to stay, and the number and types of devices that connect to the network will continue to increase. There may not be a single, simple way to manage and secure the IoT, but following the above three steps will certainly be a solid start. And start quickly; at this rate of expansion, the sooner the better.

 

Find the full article on our partner DLT’s blog Technically Speaking.

Patrick Hubbard

THWACKcamp 2018

Posted by Patrick Hubbard Employee Jun 25, 2018

Something is happening in IT. OK, something is always happening in IT, but this year the challenge of IT is measurably different. 2018 IT Trends Index data suggests that there’s a little more disconnect than usual between the technologies CIOs are excited about and the tools we rely on as operations experts. And that means not only is this year’s SolarWinds THWACKcamp a bit different, but also that you’ll want to be there more than ever. Save the date: THWACKcamp 2018 registration opens August 1.

 

2018 is the seventh year for THWACKcamp, and once again we’ll be live October 17-18 with packed session tracks covering everything from network monitoring and management, to change control, application management, storage, cloud and DevOps, security, automation, virtualization, mapping, logging, and more. Once again, you’ll want to register early to join thousands of skilled admins who attend online every year. And this year at THWACKcamp, we’re talking about the increasingly blurred lines between data center, cloud, and applications.

 

The Sage Voice of Operations

 

For as long as I can remember in IT, we admins have been called on from time to time to talk IT executives off the ledge of the Latest Shiny Thing. This year, it’s AI, machine learning, and IoT/robotics that will “change the world” in 2018. But you and I know that, as interesting as these tools are, we’re still spending most of our time on the mundane, like untangling Cisco Nexus ACLs on one of 2,000 VPC interfaces. Many are still trapped by the help desk, generally stuck in reactionary mode rather than making proactive enhancements that help the business. Conversations with you, as well as survey responses and THWACK community thread data, tell us that this year you want hands-on, how-to sessions more than ever.

 

For 2018, we’ll have SolarWinds experts across all sessions, live segments, and of course on chat, ready to answer your most challenging questions. This year at SWUGs and during SolarWinds Lab events, customers aren’t saying that they’re transforming into cloud and/or DevOps-centric organizations, but rather that IT, in all its traditional, under-staffed, over-allocated glory, is adopting new technology that was expected to go to other specialist teams.

 

For example, in most companies, new tech like containers isn’t staying in DevOps as foretold. Instead, existing, dependable operations teams learn what they need to know and tuck containers alongside everything else they manage. As admins, we’re becoming less siloed as DevOps, on-premises IT, or managed service experts, and more versatile, well-rounded technology professionals. We’re beginning to use shared tools that solve more than one problem. We’re app teams learning about distributed tracing, network teams learning about Kubernetes multiplexing, or security teams grappling with industrial IoT firewall rules, all while we keep our existing environments running. We believe 2018 will include some of the most interesting THWACK community conversations ever.

 

I’m Just Here for the Giveaways

 

While the main reason you should attend THWACKcamp is, of course, to gain skills that make you a better professional, THWACKcamp wouldn’t be THWACKcamp without plenty of opportunities to pick up some great swag. We’ll have great geek giveaways, present community awards, and run plenty of other activities to make shopping the THWACK store even more fun. Once again, you’ll have an opportunity to earn up to 20,000 THWACK points just by registering early, attending sessions live, and completing session surveys. As always, to attend sessions, win prizes, grab THWACK points, or chat, you will need to register for this unique, live event.

 

We look forward to seeing you again at THWACKcamp, so please check the THWACKcamp homepage beginning in August to register early (https://thwackcamp.com). And of course, while you’re there, feel free to watch sessions from previous years; the catalog has really grown. THWACK and SolarWinds are driven by you, and it’s always an honor to share the energy of so many of you in one place online. THWACKcamp 2018 is going to be the best one yet.

In my last post, I talked about microservices and how their tight interconnection can deliver a quality application.

 

Now, I want to move up a layer: not only interaction between microservices, but interaction between applications. Call it architecture applied to applications, where each application’s behavior is tightly connected to, and influences, the others. As with microservices, the goal is a better user experience.

 

Architecting a Good Product

The architecture is built to scale and to manage all the individual pieces as a whole. Every application should play like an instrument in an orchestra: the applications need to follow rules, strategies, and a logical and technical design. If one of them doesn’t play as expected, the whole concert becomes a low-quality performance. If the direction is improvised, the applications won’t run in harmony. The entire user experience relies on the architecture as a whole.

 

A good design describes, in detail, all the interactions between every application; this is the minimum requirement for an acceptable user experience. This so-called UX (User Experience) design is what shapes the final product.

 

Different Users, Different Designs

Happiness is extremely subjective, and consequently, so is UX. The design should consider how different users react based on age, skills, expectations, and so on. There may be several different designs, with different interactions and even different applications involved. For each type of user, the product should be easily usable in a reasonable time, and analysis of user behavior will help here. The best way to accomplish this is to offer the tool to a limited number of users as a beta and collect feedback to build a model of the interactions between the components in the UX.

 

Who Wants to be a Pilot?

Complexity of a GUI doesn’t necessarily mean a cryptic interface. To drive a sports car, you need a good understanding of driving theory and practical experience. But sports cars aren’t built just for car enthusiasts, so they need a sophisticated electronic layer to manage all the critical behavior and correct the possible mistakes a “normal” driver could make. This complex environment should be simple and intuitive for the user, to help them focus on driving. With a glance, the driver should be able to use the navigation system, enable the cruise control, activate the fog lights, and so on. All of these components are the applications in a UX architecture.

 

Is it Nice? Not Enough

Aesthetics can’t replace usability. You can spend all of your development time crafting the best icons and the nicest possible animations; however, if the UX is bad, the whole system won’t succeed. The opposite is also true: if the applications are well built and their interactions make the end user happy, even a less-polished look is acceptable, and the product can still be successful.

 

Standardize It

The Interaction Design Association (IxDA, https://ixda.org/) focuses on the design of interactions between users and services, as well as the interactions inside services and between applications. IxDA offers guidelines to follow so that a product is standardized. For users, this means that a product built to these standards is more readily usable, because all similar products behave in the same way.

 

And again: feedback, feedback, feedback! Not only in beta testing, but also once the UX is generally available. The more information you get from users, the better you can tune the applications’ interactions, and the better the overall user experience.

I hope everyone had an enjoyable Father’s Day this past weekend. For me, Father’s Day is the official start to summer. It’s also an excuse to grill as much meat as legally possible. By the time you read this, I will be on my way to Germany, to eat my weight in German grilled meat at SQL Grillen.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Apple Update Will Hamper Police Device Crackers

This update closes a loophole exploited by companies such as Grayshift, which specialize in helping police departments crack open locked phones. But by announcing the upcoming patch, Apple has given Grayshift plenty of notice to come up with alternatives. Look for this dance to continue for some time.

 

GPAs don’t really show what students learned. Here’s why.

This post was written by someone who took one class in statistics and had a 2.8 GPA. All kidding aside, I do like the idea of modifying, or eliminating, the use of GPA as a measuring stick.

 

Inside Amazon’s $3.5 Million Competition to Make Alexa Chat Like a Human

A 20-minute conversation with a robot sounds amazing and awful at the same time.

 

Blockchain explained

A visual representation to help you understand that Blockchain is a linked list with horrible latency and performance.

 

Unbreakable smart lock devastated to discover screwdrivers exist

I have no words.

 

The Death of Supply Chain Management

You can swap out the subject of this article for almost any other and the theme is the same: humans of today need to be prepared for the machines of tomorrow that will be taking their jobs.

 

Ancient Earth globe shows where you were located 750 million years ago

Because I’m a geek who loves things like this and I think you should, too.

 

TFW you go to get a cup of coffee and find your Jeep twin:

 
