Geek Speak Blogs


Here’s an interesting article by my colleague Brandon Shopp, who offers tips for monitoring and backing up cloud-based applications like Office 365.

Read more

As part of a series of blog posts helping organisations focus on what matters during the coronavirus crisis, this post discusses why focusing your monitoring on the most important metrics is essential. Visibility of your normal operational baselines helps you understand what, how, where, and why these metrics are changing, and whether your IT infrastructure and the services it delivers are actually coping.

With countries on lockdown and businesses telling staff to work from home, IT must support a dramatic shift in how services are consumed by users. I hope this article helps you gain a better understanding of how to keep your business and organisation operational during this dramatic time.

Read more

Here’s an interesting article by my colleague Brandon Shopp reviewing containers, their monitoring challenges, and suggestions on tools to manage them effectively.

 

Read more

This edition of the Actuator comes to you from my kitchen, where I'm enjoying some time at home before I hit the road. I'll be at RSA next week, then Darmstadt, Germany the following week. And then I head to Seattle for the Microsoft MVP Summit. This is all my way of saying future editions of the Actuator may be delayed. I'll do my best, but I hope you understand.

As always, here's a bunch of links I hope you will find useful. Enjoy!

It doesn’t matter if China hacked Equifax

No, it doesn't, because the evidence suggests China was but one of many entities that helped themselves to the data Equifax was negligent in guarding.

Data centers generate the same amount of carbon emissions as global airlines

Machine learning and bitcoin mining are large users of power in any data center. This is why Microsoft has announced they'll look to be carbon neutral as soon as possible.

Delta hopes to be the first carbon neutral airline

On the heels of Microsoft's announcement, seeing this from Delta gives me hope many other companies will take action rather than just issue press releases.

Apple’s Mac computers now outpace Windows in malware and virus

Nothing is secure. Stay safe out there.

Over 500 Chrome Extensions Secretly Uploaded Private Data

Everything is terrible.

Judge temporarily halts work on JEDI contract until court can hear AWS protest

This is going to get ugly to watch. You stay right there, I'll go grab the popcorn.

How to Add “Move to” or “Copy to” to Windows 10’s Context Menu

I didn't know I needed this until now, and now I'm left wondering how I've lived so long without this in my life.

Our new Sunday morning ritual is walking through Forest Park. Each week we seem to find something new to enjoy.

Read more

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by Brandon Shopp about DoD’s not-so-secret weapon against cyberthreats. DISA has created technical guidelines that evolve to help keep ahead of threats, and this blog helps demystify DISA STIGs.

The Defense Information Systems Agency (DISA) has a set of security regulations to provide a baseline standard for Department of Defense (DoD) networks, systems, and applications. DISA enforces hundreds of pages of detailed rules IT pros must follow to properly secure or “harden” the government computer infrastructure and systems.

If you’re responsible for a DoD network, these STIGs (Security Technical Implementation Guides) help guide your network management, configuration, and monitoring strategies across access control, operating systems, applications, network devices, and even physical security. DISA releases new STIGs at least once every quarter. This aggressive release schedule is designed to catch as many recently patched vulnerabilities as possible and ensure a secure baseline for the component in operation.

How can a federal IT pro get compliant when so many requirements must be met on a regular basis? The answer is automation.

First, let’s revisit STIG basics. The DoD developed STIGs, or hardening guidelines, for the most common components comprising agency systems. As of this writing, there are nearly 600 STIGs, each of which may comprise hundreds of security checks specific to the component being hardened.

A second challenge, in addition to the cost of meeting STIG requirements, is the number of requirements needing to be met. Agency systems may be made up of many components, each requiring STIG compliance. Remember, there are nearly 600 different versions of STIGs, some unique to a component, some targeting specific release versions of the component.

Wouldn’t it be great if automation could step in and solve the cost challenge while saving time by building repeatable processes? That’s precisely what automation does.

  • Automated tools for Windows servers let you test STIG compliance on a single instance, test all changes until approved, then push out those changes to other Windows servers via Group Policy Object (GPO) automation. Automated tools for Linux permit a similar outcome: test all changes due to STIG compliance and then push all approved changes as a tested, secure baseline out to other servers.
  • Automated network monitoring tools digest system logs in real time, create alerts based on predefined rules, and help meet STIG requirements for Continuous Monitoring (CM) security controls while providing the defense team with actionable response guidance.
  • Automated device configuration tools can continuously monitor device configurations for setting changes across geographically dispersed networks, enforcing compliance with security policies and making configuration backups useful in system restoration efforts after an outage.
  • Automation also addresses readability. STIGs are released in XML format—not the most human-readable form for delivering data. Some newer automated STIG compliance tools generate easy-to-read compliance reports useful for both security management and technical support teams.
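
The readability point in the last bullet is easy to demonstrate. As a rough sketch (the XCCDF fragment, rule IDs, and titles below are invented for illustration; real STIG benchmarks contain hundreds of rules), a few lines of Python can pull each rule's ID, severity, and title out of a STIG's XML and print a readable summary:

```python
# Sketch: turning STIG XCCDF XML into a readable summary.
# The fragment below is a minimal, hypothetical example in the
# XCCDF 1.1 namespace; real benchmarks are far larger.
import xml.etree.ElementTree as ET

XCCDF_NS = {"x": "http://checklists.nist.gov/xccdf/1.1"}

SAMPLE = """\
<Benchmark xmlns="http://checklists.nist.gov/xccdf/1.1">
  <Group id="V-1000">
    <Rule id="SV-1000r1_rule" severity="high">
      <title>Password complexity must be enforced</title>
    </Rule>
  </Group>
  <Group id="V-1001">
    <Rule id="SV-1001r1_rule" severity="medium">
      <title>Audit logging must be enabled</title>
    </Rule>
  </Group>
</Benchmark>
"""

def summarize(xccdf_text):
    """Return (rule_id, severity, title) tuples from an XCCDF document."""
    root = ET.fromstring(xccdf_text)
    rows = []
    for rule in root.iter("{http://checklists.nist.gov/xccdf/1.1}Rule"):
        title = rule.find("x:title", XCCDF_NS)
        rows.append((rule.get("id"), rule.get("severity"),
                     title.text if title is not None else ""))
    return rows

for rule_id, severity, title in summarize(SAMPLE):
    print(f"{severity:<8} {rule_id}  {title}")
```

This is no substitute for a purpose-built compliance tool, but it shows how little work the XML-to-report step itself takes.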

If you’re a federal IT pro within a DoD agency, you have an increasing number of requirements to satisfy. Let automation take some of the heavy lifting when it comes to compliance, so you and your team can focus on more pressing tasks.

Find the full article on Government Technology Insider.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Read more

This week's Actuator comes to you from Austin, as I'm in town to host SolarWinds Lab live. We'll be talking about Database Performance Monitor (née VividCortex). I hope you find time to watch and bring questions!

As always, here's a bunch of links I hope you find useful. Enjoy!

First clinical trial of gene editing to help target cancer

Being close to the biotech industry in and around Boston, I heard rumors of these treatments two years ago. I'm hopeful our doctors can get this done, and soon.

What Happened With DNC Tech

Twitter thread about the tech failure in Iowa last week.

Analysis of compensation, level, and experience details of 19K tech workers

Wonderful data analysis on salary information. Start at the bottom with the conclusions, then decide for yourself if you want to dive into the details above.

Things I Believe About Software Engineering

There are some deep thoughts in this brief post. Take time to reflect on them.

Smart Streetlights Are Experiencing Mission Creep

Nice reminder that surveillance is happening all around us, in ways you may never know.

11 Reasons Not to Become Famous (or “A Few Lessons Learned Since 2007”)

A bit long, but worth the time. I've never been a fan of Tim or his book, but this post struck a chord.

Berlin artist uses 99 phones to trick Google into traffic jam alert

Is it wrong that I want to try this now?

I think I understand why they never tell me anything around here...

Read more

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Mav Turner with ideas about how the government could monitor and automate their hyperconverged infrastructure to help achieve their modernization objectives.

It’s no surprise hyperconverged infrastructure (HCI) has been embraced by a growing number of government IT managers, since HCI merges storage, compute, and networking into a much smaller and more manageable footprint.

As with any new technology, however, HCI’s wide adoption can be slowed by skeptics, such as IT administrators concerned with interoperability and implementation costs. However, HCI doesn’t mean starting from scratch. Indeed, migration is best achieved gradually, with agencies buying only what they need, when they need it, as part of a long-term IT modernization plan.

Let’s take a closer look at the benefits HCI provides to government agencies, then examine key considerations when it comes to implementing the technology—namely, the importance of automation and infrastructure monitoring.

Combining the Best of Physical and Virtual Worlds

Enthusiasts like that HCI gives them the performance, reliability, and availability of an on-premises data center along with the ability to scale IT in the cloud. This flexibility allows them to easily incorporate new technologies and architectures into the infrastructure. HCI also consolidates previously disparate compute, networking, and storage functions into a single, compact data center.

Extracting Value Through Monitoring and Automation

Agencies are familiar with monitoring storage, network, and compute as separate entities; when these functions are combined with HCI, network monitoring is still required. Indeed, having complete IT visibility becomes more important as infrastructure converges.

Combining different services into one is a highly complex task fraught with risk. Things change rapidly, and errors can easily occur. Managers need clear insight into what’s going on with their systems.

After the initial deployment is complete, monitoring should continue unabated. It’s vital for IT managers to understand the impact apps, services, and integrated components have on each other and on the legacy infrastructure around them.

Additionally, all these processes should be fully automated. Autonomous workload acceleration is a core HCI benefit. Automation binds HCI components together, making them easier to manage and maintain—which in turn yields a more efficient data center. If agencies don’t spend time automating the monitoring of their HCI, they’ll run the risk of allocating resources or building out capacity they don’t need and may expose organizational data to additional security threats.

Investing in the Right Technical Skills

HCI requires a unique skillset. It’s important for agencies to invest in technical staff with practical HCI experience and the knowledge to effectively implement infrastructure monitoring and automation capabilities. These experts will be critical in helping agencies take advantage of the vast potential this technology has to offer.

Reaping the Rewards of HCI

Incorporating infrastructure monitoring and automation into HCI implementation plans will enable agencies to reap the full rewards: lower total cost of IT ownership thanks to simplified data center architecture, consistent and predictable performance, faster application delivery, improved agility, IT service levels accurately matched to capacity requirements, and more.

There’s a lot of return for simply applying the same level of care and attention to monitoring HCI as traditional infrastructure.

Find the full article on Government Computer News.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Read more

This week's Actuator comes to you from New England where it has been 367 days since our team last appeared in a Super Bowl. I'm still not ready to talk about it, though.

As always, here's a bunch of links I hope you find interesting. Enjoy!

97% of airports showing signs of weak cybersecurity

I would have put the number closer to 99%.

Skimming heist that hit convenience chain may have compromised 30 million cards

Looks like airports aren't the only industry with security issues.

It’s 2020 and we still have a data privacy problem

SPOILER ALERT: We will always have a data privacy problem.

Don’t be fooled: Blockchains are not miracle security solutions

No, you don't need a blockchain.

Google’s tenth messaging service will “unify” Gmail, Drive, Hangouts Chat

Tenth time is the charm, right? I'm certain this one will be the killer messaging app they have been looking for. And there's no way once it gets popular they'll kill it, either.

A Vermont bill would bring emoji license plates to the US

Just like candy corn, here's something else no one wants.

For the game this year I made some pork belly bites in a garlic honey soy sauce.

Read more

This week's Actuator comes to you from the suddenly mild January here in the Northeast. I'm taking advantage of the warm and dry days up here, spending time walking outdoors. Being outdoors is far better than the treadmill at the gym.

As always, here's a bunch of links from the internet I hope you will find useful. Enjoy!

Jeff Bezos hack: Amazon boss's phone 'hacked by Saudi crown prince'

I don't know where to begin. Maybe we can start with the idea that Bezos uses WhatsApp, an app known to be unsecured and owned by the unsecured Facebook. I'm starting to think he built a trillion-dollar company by accident, not because he's smart.

New Ransomware Process Leverages Native Windows Features

This is notable, but not new. Ransomware often uses resources available on the machine to do damage. For example, VB macros embedded in spreadsheets. I don't blame Microsoft for saying they won't provide security service for this, but it would be nice if they could hint at finding ways to identify and halt malicious activity.

London facial recognition: Metropolitan police announces new deployment of cameras

Last week the EU was talking about a five-year ban on facial recognition technology. Naturally, the U.K. decides to double down on their use of that same tech. I can't help but draw the conclusion this shows the deep divide between the U.K. and the EU.

Security Is an Availability Problem

I'm not certain, but I suspect many business decision-makers tend to think "that can't happen to us," and thus fail to plan for the day when it does happen to them.

Apple's dedication to 'a diversity of dongles' is polluting the planet

Words will never express my frustration with Apple for the "innovation" of removing a headphone jack and forcing me to buy additional hardware to continue to use my existing accessories.

Webex flaw allowed anyone to join private online meetings - no password required

The last thing I'm doing during the day is trying to join *more* meetings.

Play Dungeons & Deadlines

You might want to set aside some time for this one.

Walking through Forest Park this past Sunday, after a rainstorm the day before and the temperature so perfect to catch the steam coming off the trees.

Read more

Back from Austin and home for a few weeks before I head...back to Austin for a live episode of SolarWinds Lab. Last week was the annual Head Geeks Summit, and it was good to be sequestered for a few days with just our team as we map out our plans for world domination in 2020 (or 2021, whatever it takes).

As always, here's a bunch of stuff I found on the internetz this week that I think you might enjoy. Cheers!

Critical Windows 10 vulnerability used to Rickroll the NSA and Github

Patch your stuff, folks. Don't wait, get it done.

WeLeakInfo, the site which sold access to passwords stolen in data breaches, is brought down by the ...

In case you were wondering, the website was allowed to exist for three years before it was finally shut down. No idea what took so long, but I tip my hat to the owners. They didn't steal anything, they just took available data and made it easy to consume. Still, they must have known they were in murky legal waters.

Facial recognition: EU considers ban of up to five years

I can't say if that's the right amount of time; I'd prefer they ban it outright for now. This isn't just a matter of the tech being reliable, it brings about questions regarding basic privacy versus a surveillance state.

Biden wants Sec. 230 gone, calls tech “totally irresponsible,” “little creeps”

Politics aside, I agree with the idea that a website publisher should bear some burden regarding the content allowed. Similar to how I feel developers should be held accountable for deploying software that's not secure, or leaving S3 buckets wide open. Until individuals understand the risks, we will continue to have a mess of things on our hands.

Microsoft pledges to be 'carbon negative' by 2030

This is a lofty goal, and I applaud the effort here by Microsoft to erase their entire carbon footprint since they were founded in 1975. It will be interesting to see if any other companies try to follow, but I suspect some (*cough* Apple) won't even bother.

Google’s Sundar Pichai doesn’t want you to be clear-eyed about AI’s dangers

In today's edition of "do as I say, not as I do", Google reminds us that their new motto is "Only slightly evil."

Technical Debt Is like a Tetris Game

I like this analogy, and thought you might like it as well. Let me know if it helps you.

If you are ever in Kansas City, run, don't walk, to Jack Stack and order the beef rib appetizer. You're welcome.

Read more

2019 was a busy year for DevOps as measured by the events held on the topic. Whether it be DevOps days around the globe, DockerCon, DevOps Enterprise Summits, KubeCon, or CloudNativeCon, events are springing up to support this growing community. With a huge number of events already scheduled for 2020, people plan on improving their skills with this technology. This is great—it’ll allow DevOps leaders to close capability gaps and it should be a must for those on a DevOps journey in 2020.

Hopefully, we’ll see more organizations adopt the key stages of DevOps evolution (foundation building, normalization, standardization, expansion, automated infrastructure delivery, and self-service) by following this model. Understanding where you are on the journey helps you plan what needs to be satisfied at each level before trying to move on to an area of greater complexity. By looking at the levels of integration and the growing tool chain, we can see where you are and plan accordingly. I look forward to seeing and reading about the trials and how they were overcome by organizations looking to further their DevOps movement in 2020.

You’ll probably hear terms like NoOps and DevSecOps gain more traction over the coming year from certain analysts. I believe the name DevOps is currently fine for what you’re trying to achieve. If you follow correct procedures, then security and operations already make up a large subset of your workflows, so you shouldn’t need to call them out as separate terms. If you’re not pushing changes to live systems, then you aren’t really doing any operations, and therefore not truly testing your code. So how can you go back and improve or iterate on it? As for security, while it’s hard to implement correctly and to work on collaboratively, the need to adopt it properly is only growing. Organizations that have matured and evolved through the stages above are far more likely to place emphasis on the integration of security than those just starting out. Improved security posture will be a key talking point as we progress through 2020 and into the next decade.

Kubernetes will gain even more ground in 2020 as more people look to it as a robust method of container orchestration to scale, monitor, and run any application. Many big-name software vendors are investing in what they see as the “next battleground” for variants on the open-source application management tool.

Organizations will start to invest in more use of artificial intelligence, whether it be for automation, remediation, or improved testing. You can’t deny artificial intelligence and machine learning are hot right now, and they will seep into this aspect of technology in 2020. The best place to try this is with a cloud provider, saving you the need to invest in hardware; the provider can get you up and running in minutes.

Microservices and container infrastructure will be another area of growth within the coming 12 months. Container registries are beneficial to organizations: they allow companies to apply policies (whether for security, access control, or something else) to how they manage containers. JFrog Container Registry is probably going to lead the charge in 2020, but don’t think they’ll have it easy, as AWS, Google, Azure, and other software vendors have products fighting for this space.

These are just a few of the areas I see becoming topics of conversation and column inches as we move into 2020 and beyond, and it tells me this is where to develop your skills if you want to be in demand as we move into the second decade of this century.

Read more

In Austin this week for our annual meeting of Head Geeks. The first order of business is to decide what to call our group. I prefer a "gigabyte of Geeks," but I continue to be outvoted. Your suggestions are welcome.

As always, here's a bunch of links from the internet I hope you find interesting. Enjoy!

Facebook again refuses to ban political ads, even false ones

Zuckerberg continues to show the world he only cares about ad revenue, for without that revenue stream his company would collapse.

Scooter Startup Lime Exits 12 Cities and Lays Off Workers in Profit Push

Are you saying that renting scooters your customers then abandon across cities *is not* a profitable business model? That's crazy!

Russian journals retract more than 800 papers after ‘bombshell’ investigation

I wish we could do the same thing with blog posts, old and new.

Alleged head of $3.5M crypto mining scam bought stake in nightclub

A cryptocurrency scam? Say it isn't so! Who knew this was even possible?

Ring confirms it fired four employees for watching customer videos

Ah, but only after an external complaint, and *after* their actions were known internally. In other words, these four would still have jobs if not for the external probe.

Tesla driver arrested for flossing at 84 mph on autopilot

Don't judge, we've all been there, stuck in our car and in need of flossing our teeth.

It's helpful for a restaurant to publish their menu outside for everyone to see.

Read more

There are many configuration management, deployment, and orchestration tools available, ranging from open-source tools to automation engines. Ansible is one such software stack available to cover all the bases, and seems to be gaining more traction by the day. In this post, we’ll look at how this simple but powerful tool can change your software deployments by bringing consistency and reliability to your environment.

Ansible gives you the ability to provision, control, configure, and deploy applications to multiple servers from a single machine. Ansible allows for successful repetition of tasks, can scale from one to 10,000 or more endpoints, and uses YAML to apply configuration changes, which is easy to read and understand. It’s lightweight, uses SSH, PowerShell, and APIs for access, and as mentioned above, is an open-source project. It’s also agentless, which differentiates it from some other similar competitive tools in this marketplace. Ansible is designed with your whole infrastructure in mind rather than individual servers. If you need dashboard monitoring, then Ansible Tower is for you.

Once installed on a master server, you create an inventory of machines or nodes for it to perform tasks on. You can then start to push configuration changes to those nodes. An Ansible playbook is a collection of tasks you want executed on a remote server, written in a configuration file. Playbooks can take you from simple management and configuration of remote machines all the way to a multifaceted deployment; here are five tips to start getting the most out of what the tool can deliver.

  1. Passwordless SSH keys are the way to go, and probably something you should set up from day one, not just for Ansible. This uses public key authentication between hosts (SSH protocol 2); most OSs create 2048-bit keys by default, though in certain situations this can be raised to 4096-bit. No longer do you have to type in long, complex passwords for every login session, and this more reliable, easier-to-maintain method makes your environment both more secure and easier for Ansible to work with.
  2. Use check mode to dry run most modules. If you’re not sure how a new playbook or update will perform, dry runs are for you. With configuration management and Ansible’s ability to report on desired state versus your end goal, you can use check mode to preview what changes would be applied to the system in question. Simply add the --check flag to the ansible-playbook command for a glance at what will happen.
  3. Use Ansible roles. This is where you break a playbook out into multiple files. This file structure consists of a grouping of files, tasks, and variables, which moves you toward modularization of your code. That allows independent adaptation and upgrade, and lets you reuse configuration steps, making changes and improvements to your Ansible configurations easier.
  4. Ansible Galaxy is where you should start any new project. Access to roles, playbooks, and modules from community and vendors—why reinvent the wheel? Galaxy is a free site for searching, rating, downloading, and even reviewing community-developed Ansible roles. This is a great way to get a helping hand with your automation projects.
  5. Use a third-party vault software. Ansible Vault is functional, but a single shared secret makes it hard to audit or control who has access to all the nodes in your environment. Look for something with a centrally managed repository of secrets you can audit and lock down in a security breach scenario. I suggest HashiCorp Vault as it can meet all these demands and more, but others are available.
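
To make the check-mode tip concrete, here's a minimal, hypothetical playbook; the host group and package names are placeholders for your own inventory and roles:

```yaml
# site.yml - a minimal example playbook. "webservers" and the
# chrony package/service names are placeholders, not a recommendation.
---
- name: Baseline configuration
  hosts: webservers
  become: true
  tasks:
    - name: Ensure chrony is installed
      ansible.builtin.package:
        name: chrony
        state: present
    - name: Ensure chrony is running and enabled
      ansible.builtin.service:
        name: chronyd
        state: started
        enabled: true
```

Running `ansible-playbook site.yml --check` reports what would change without touching the hosts; adding `--diff` also shows before/after content for any files the play would modify.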

Hopefully you now have a desire to either start using Ansible and reduce time wasted on rinse and repeat configuration tasks, or you’ve picked up a few tips to take your skills to the next level and continue your DevOps journey.

Read more

Welcome back! I hope y'all had a happy and healthy holiday break. I'm back in the saddle after hosting a wonderful Christmas dinner for 20 friends and family. I had some time off as well, which I used to work a bit on my blog as well as some Python and data science learning.

As usual, here's a bunch of links from the internet I hope you'll find useful. Enjoy!

Team that made gene-edited babies sentenced to prison, fined

I wasn't aware we had reached the point of altering babies' DNA, but here we are.

2019 Data Breach Hall of Shame: These were the biggest data breaches of the year

I expect a longer list from 2020.

Bing’s Top Search Results Contain an Alarming Amount of Disinformation

A bit long, but worth some time and a discussion. I never think about how search engines try to determine the veracity of the websites returned in a search.

Google and Amazon are now in the oil business

File this under "Do as I say, not as I do."

Seven Ways to Think Like a Programmer

An essay about data that warmed my heart. I think a lot of this applies to every role, especially for those of us inside IT.

The other side of Stack Overflow content moderation

Start this post by reading the summary, then take in some of the specific cases he downvoted. The short of it is this: humans are horrible at communicating through texts, no matter what the forum.

This Is How To Change Someone’s Mind: 6 Secrets From Research

If you want to have more success at work, read this post. I bet you can think of previous discussions at work and understand where things went wrong.

For New Year's Eve I made something special - 6 pounds of pork belly bites in a honey soy sauce. They did not last long. No idea what everyone else ate, though.

Read more

I visited the Austin office this past week, my last trip to SolarWinds HQ for 2019. It’s always fun to visit Austin and eat my weight in pork products, but this week was better than most. I took part in deep conversations around our recent acquisition of VividCortex.

I can’t begin to tell you how excited I am for the opportunity to work with the VividCortex team.

Well, maybe I can begin to tell you. Let’s review two data points.

In 2013, SolarWinds purchased Confio Software, makers of Ignite (now known as Database Performance Analyzer, or DPA) for $103 million. That’s where my SolarWinds story begins, as I was included with the Confio purchase. I had been with Confio since 2010, working in sales engineering, customer support, product development, and corporate marketing. We made Ignite into a best-of-breed monitoring solution that’s now the award-winning, on-prem and cloud-hosted DPA loved by DBAs globally.

The second data point is from last week, when SolarWinds bought VividCortex for $117.5 million. One thing I want to make clear is SolarWinds just doubled down on our investment in database performance monitoring. Anyone suggesting anything otherwise is spreading misinformation.

Through all my conversations last week with members of both product teams, one theme was clear. We are committed to providing customers with the tools necessary to achieve success in their careers. We want happy customers. We know customer success is our success.

Another point that was made clear is the VividCortex product will complement, not replace, DPA, expanding our database performance monitoring portfolio in a meaningful way. Sure, there is some overlap with MySQL, as both tools offer support for that platform. But the tools have some key differences in functionality. Currently, VividCortex is a SaaS monitoring solution for popular open-source platforms (PostgreSQL, MySQL, MongoDB, Amazon Aurora, and Redis). DPA provides both monitoring and query performance insights for traditional relational database management systems and is not yet available as a SaaS solution.

This is why we view VividCortex as a product to enhance what SolarWinds already offers for database performance monitoring. We’re now stronger this week than we were just two weeks ago. And we’re now poised to grow stronger in the coming months.

This is an exciting time to be in the database performance monitoring space, with 80% of workloads still Earthed. If you want to know about our efforts regarding database performance monitoring products, just AMA.

I can't wait to get started on helping build next-gen database performance monitoring tools. That’s what VividCortex represents, the future for database performance monitoring, and why this acquisition is so full of goodness. Expect more content in the coming weeks from me regarding our efforts behind the scenes with both VividCortex and DPA.

Read more
4 9 597
Level 17

I hope this edition of the Actuator finds you and yours in the middle of a healthy and happy holiday season. With Christmas and New Year's falling on Wednesday, I'll pick this up again in 2020. Until then, stay safe and warm.

As always, here's a bunch of stuff I found on the internet I thought you might enjoy.

Why Car-Free Streets Will Soon Be the Norm

I'm a huge fan of having fewer cars in the middle of any downtown city. I travel frequently enough to European cities and I enjoy the ability to walk and bike in areas with little worry of automobiles.

Microsoft and Warner Bros trap Superman on glass slide for 1,000 years

Right now, one of you is reading this and wondering how to monitor glass storage and if an API will be available. OK, maybe it's just me.

The trolls are organizing—and platforms aren't stopping them

This has been a problem with online communities since they first started; it's not a new problem.

New Orleans declares state of emergency following cyberattack

Coming to a city near you, sooner than you may think.

Facebook workers' payroll data was on stolen hard drives

"Employee wasn’t supposed to take hard drives outside the office..." Security is hard because people are dumb.

A Sobering Message About the Future at AI's Biggest Party

The key takeaway here is the discussion around how narrow the focus is for specific tasks. Beware the AI snake oil salesman promising you their algorithms and models work for everyone. They don't.

12 Family Tech Support Tips for the Holidays

Not a bad checklist for you to consider when your relatives ask for help over the holidays.

Yes, I do read books about bacon. Merry Christmas, Happy Holidays, and best wishes.

Read more
1 30 560
Level 10

I was in the pub recently for the local quiz and afterwards, I got talking to someone I hadn’t seen for a while. After a few minutes, we started discussing a certain app he loves on his new phone; he wished the creators would fix a problem with the way it displays information so it looks like it does when he logs in via a web browser.

“It’s all got to do with technical debt,” I blurted out.

“What?” he replied.

“When they programmed the app, the programmers took an easier route rather than figuring out how to display the details the same way as your browser, so they could ship it quicker to you, the consumer, and they have yet to repay the debt. It’s like a credit card.”

It’s fine to have some technical debt, like having an outstanding balance on a credit card, and sometimes you can pay off the interest, i.e., apply a patch; but there comes a point when you need to pay off the balance. This is when you need to revisit the code and implement a section properly; and hence pay off the debt.

There are several reasons you accrue technical debt, one of which is a lack of experience or skills on the coding team. If the team doesn’t have the right understanding or skills to solve the problem, the debt will only grow.

How can you help solve this? I’m a strong proponent of the education you can glean from attending a conference, whether it be Kubecon, Next, DEFCON, or AWS re:Invent, which I just attended. These are great places to sit down and discuss things with your peers, make new friends, discover fresh GitHub repositories, learn from experts, and hear about new developments in the field, possibly ahead of their release, which may either give you a new idea or help solve an existing problem. Another key use case for attending is the ability to provide feedback. Feedback loops are a huge source of information for developers. Getting actual customer feedback, good or bad, helps shape the short-term to long-term goals of a project and can help you understand if you’re on the right path for your target audience.

So, how do you get around these accrued debts? First, you need a project owner whose goal is to make sure the overall design and architecture are adhered to. It should also be their job to ensure coding standards are followed and documentation is created to accompany the project. Then, with the help of regression testing and refactoring over time, you’ll find problems and defects in your code and be able to fix them. Any rework from refactoring needs to be planned and assigned correctly.

There are other ways to deal with debt, like bug fix days and code reviews, and preventative methods like regular clear communication between business and developer teams, to ensure the vision is implemented correctly and it delivers on time to customers.

Another key part of dealing with technical debt is taking responsibility: everyone involved with the project should be aware of where they may have to address issues. By being open rather than hiding the problem, the debt can be planned for and dealt with. Remember, accruing some technical debt is always going to happen—just like credit card spending.

Read more
1 13 454
Level 17

Good morning! By the time you read this post, the first full day of Black Hat in London will be complete. I share this with you because I'm in London! I haven't been here in over three years, but it feels as if I never left. I'm heading to watch Arsenal play tomorrow night, come on you gunners!

As always, here's a bunch of links I hope you find interesting. Cheers!

Hacker’s paradise: Louisiana’s ransomware disaster far from over

The scary part is the State of Louisiana was more prepared than 90% of other government agencies (HELLO, BALTIMORE!). That’s something to think about as ransomware attacks intensify.

How to recognize AI snake oil

Slides from a presentation I wish I'd created.

Now even the FBI is warning about your smart TV’s security

Better late than never, I suppose. But yeah, your TV is one of many security holes found in your home. Take the time to help family and friends understand the risks.

A Billion People’s Data Left Unprotected on Google Cloud Server

To be fair, it was data curated from websites. In other words, no secrets were exposed. It was an aggregated list of information about people. So, the real questions should now focus on who created such a list, and why.

Victims lose $4.4B to cryptocurrency crime in first 9 months of 2019

Crypto remains a scam, offering an easy way for you to lose real money.

Why “Always use UTC” is bad advice

Time zones remain hard.

You Should Know These Industry Secrets

Saw this thread in the past week and many of the answers surprised me. I thought you might enjoy them as well.

You never forget your new Jeep's first snow.

jeepsnow.jpg

Read more
0 31 523
Level 10

It’s time to stop seeing IT as just a black hole to throw money into and instead show how it gives back to the business. Often, the lack of proper measurement and standardization is the problem, so how do we address it?

We’ve all tried to leverage more funds from the organization to undertake a project. Whether it’s replacing aging hardware or out-of-support software, it’s hard to obtain the required financing without the business seeing the value the IT department brings. That’s because the board typically sees IT as a cost center.

A cost center is defined as a department within a business not directly adding profit, but still requiring money to run. A profit center is the inverse, whereby its operation directly adds to overall company profitability.

But how can you get there? You need to understand everything the IT department does is a process, and therefore can be measured and improved upon. Combining people, technology, and other assets, then passing them through a process to deliver customer success or business results, is a strategic outcome you want to enhance. If you can improve the technology and processes, you’ll increase the strategic outcomes.

By measuring and benchmarking these processes, you can illustrate the improvements made due to upgrades. Maybe a web server can now support 20% more traffic, or its loading latency has been reduced by four seconds, which over a three-month period has led to a 15% increase in web traffic and an 8% rise in sales. While I’ve pulled these figures out of the air, the point is the finance department evaluates spending and can now see a tangible return on the released funds. The web server project increased web traffic, sales, and therefore profits.

When you approach the next project you want to undertake, you don’t have to just say “it’s faster/newer/bigger than our current system” (although you should still explain how the faster, newer, bigger piece of hardware or software will improve the overall system). Without data, however, you have no proof, and the proof is in the pudding, as they say. Nothing will make a CFO happier than seeing a project with milestones and KPIs (well, maybe three quarters exceeding predictions). So, how do we measure and report all these statistics?
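To make the arithmetic concrete, here’s a quick back-of-the-napkin calculation using the same made-up figures; the baseline revenue and project cost are invented purely for illustration:

```python
# Hypothetical ROI calculation for the web server upgrade example above.
# All figures are illustrative, matching the made-up numbers in the text.
baseline_monthly_sales = 100_000.0   # revenue before the upgrade ($)
sales_uplift = 0.08                  # 8% rise in sales after the upgrade
project_cost = 15_000.0              # cost of the upgrade project ($)
months_measured = 3                  # measurement window from the example

extra_revenue = baseline_monthly_sales * sales_uplift * months_measured
roi = (extra_revenue - project_cost) / project_cost

print(f"Extra revenue over {months_measured} months: ${extra_revenue:,.2f}")
print(f"ROI: {roi:.1%}")
```

Even a crude calculation like this turns “the new server is faster” into a line the finance team can evaluate.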

If you think of deployments in terms of weeks or months, you’re trying to deploy something monolithic yet composed of many moving and complex parts. Try to break this down. Think of it like a website. You can update one page without having to do the whole site. Then you start to think “instead of the whole page, what about changing a .jpg, or a background?” Before long, you’ve started to decouple the application at strategic points, allowing for independent improvement. At this stage, I’d reference the Cloud Native Computing Foundation Trail Map as a great way to see where to go. Their whole ethos on empowering organizations running modern scalable applications can help you with transformation.

But we’re currently looking at the measurement aspect of any application deployment. I’m not just talking about hardware or network monitoring, but a method of obtaining baselines and peak loads, and of being able to predict when a system will reach capacity and how to react to it.

Instead of being a reactive IT department, you suddenly become more proactive. Routing tickets from your ticketing system directly to your developers allows them to react faster to any errors from a code change and quickly fix the issue or revert to an earlier deployment, thereby failing fast and optimizing performance.

I suggest if you’re in charge of an application or applications, or support one on a daily basis, you start to measure and record anywhere you can across the full stack. Understand what normal looks like, and how it differs from “rush hour,” so you can say with more certainty it’s not the network. Maybe it’s the application, or it’s the DNS (it’s always DNS) leading to delays, lost revenue, or worse, complete outages. Prove to the naysayers in your company you have the correct details and that 73.6% of the statistics you present aren’t made up.
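As a rough sketch of what “understanding normal” can look like in practice, here’s a minimal baseline-and-anomaly check. The sample values and the three-sigma threshold are illustrative, not a recommendation for any particular tool:

```python
# Minimal baseline/anomaly sketch, assuming you've collected per-minute
# response times (ms) for a service. Values and threshold are illustrative.
from statistics import mean, stdev

def build_baseline(samples):
    """Return (mean, stdev) of historical 'normal' measurements."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline_mean, baseline_stdev, n_sigma=3):
    """Flag a reading more than n_sigma standard deviations from normal."""
    return abs(value - baseline_mean) > n_sigma * baseline_stdev

# Typical response times vs. one suspicious reading
normal = [120, 118, 125, 130, 122, 119, 127, 124, 121, 126]
m, s = build_baseline(normal)
print(is_anomalous(123, m, s))   # within the normal range
print(is_anomalous(450, m, s))   # well outside it
```

The point isn’t the statistics; it’s that once you’ve recorded a baseline, “is this normal?” becomes a question you can answer with data rather than a hunch.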

Read more
0 14 372
Level 17

I am back in Orlando this week for Live 360, where I get to meet up with 1,100 of my close personal data friends. If you're attending this event, please find me--I'm the tall guy who smells like bacon.

As always, here are some links I hope you find interesting. Enjoy!

Google will offer checking accounts, says it won’t sell the data

Because Google has proved itself trustworthy over the years, right?

Google Denies It’s Using Private Health Data for AI Research

As I was just saying...

Automation could replace up to 800 million jobs by 2035

Yes, the people holding those jobs will transition to different roles. It's not as if we'll have 800 million people unemployed.

Venice floods: Climate change behind highest tide in 50 years, says mayor

I honestly wouldn't know if Venice was flooded or not.

Twitter to ban all political advertising, raising pressure on Facebook

Your move, Zuck.

California man runs for governor to test Facebook rules on lying

Zuckerberg is doubling down with his stubbornness on political ads. That's probably because Facebook revenue comes from such ads, so ending them would kill his bottom line.

The Apple Card Is Sexist. Blaming the Algorithm Is Proof.

Apple, and their partners, continue to lower the bar for software.

Either your oyster bar has a trough or you're doing it wrong. Lee & Rick's in Orlando is a must if you are in the area.

leeandrick.jpg

Read more
0 33 728
Level 12

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Brandon Shopp with ideas for protecting IT assets in battlefield situations.

“Dominance” and “protection” sum up the Defense Department’s goals as U.S. armed forces begin to modernize their networks and communications systems. DOD is investing significant resources in providing troops with highly advanced technology so they can effectively communicate with each other and allies in even the harshest environments.

Efforts like the Army’s ESB-E tactical network initiative, for example, represent an attempt to keep warfighters constantly connected through a unified communications network. These solutions will be built off more scalable, adaptable, and powerful platforms than those provided by older legacy systems.

Programs like ESB-E are being designed to provide wide-scale communications in hostile territory. It will be incumbent upon troops in the field to monitor, manage, and secure the network to fulfill the “protection” part of DOD’s two-fisted battlefield domination strategy.

Moving forward to take this technological hill, DOD should keep these three considerations in mind.

1. The Attack Surface Will Increase Exponentially

Over the years, the battlefield has become increasingly kinetic and dependent upon interconnected devices and even artificial intelligence. The Army Research Laboratory calls this the internet of battlefield things—a warzone with different points of contact ultimately resulting in everything and everyone being more connected and, thus, intelligent.

The Pentagon is looking to take the concept as far as possible to give warfighters a tactical and strategic edge. For example, the Army wants to network soldiers and their weapons systems, and the Navy plans to link its platforms across hundreds of ships.

Opening these communication channels will significantly increase the potential attack surface. The more connection points, the greater the threat of exposure. Securing a communications system of such complexity will prove to be a far more daunting challenge than what’s involved in monitoring and managing a traditional IT network. Armed forces must be prepared to monitor, maintain, and secure the entire communications system.

2. Everyone Must Have Systems Expertise

The line between soldiers and system administrators has blurred as technology has advanced into the battlefield. As communications systems expand, all service members must be able to identify problems to ensure both unimpeded and uninterrupted communications and the security of the information being exchanged.

All troops must be bought into the concept of protecting the network and its communications components and be highly skilled in managing and maintaining these technologies. This is particularly important as communications solutions evolve.

Soldiers will need to quickly secure communications tools if they’re compromised, just as they would any other piece of equipment harboring sensitive information or access points. And they will require clear visibility into the entirety of the network to be able to quickly pinpoint any anomalies.

3. Staff Must Increase Commensurate With the Size of the Task

The armed forces must bulk up on staff to support these expansive modern communications systems. Fortunately, the military has a wealth of individuals with network and systems administration experience. Unfortunately, they lack in other critical areas.

Security specialists remain in high demand, but the cybersecurity workforce gap is real, even in the military. The White House’s National Cyber Strategy offers some good recommendations, including reskilling workers from other disciplines and identifying and fostering new talent. The actions highlighted in the plan coalesce with DOD’s need to fortify and strengthen its cybersecurity workforce as it turns its focus toward relentlessly winning the battlefield communications war.

Whoever wins this war will truly establish dominance over air, land, sea, and cyberspace. Victory lies in educating and finding the right personnel to protect information across what will undoubtedly be a wider and more attractive target for America’s adversaries.

Find the full article on Government Computer News.

  The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Read more
0 10 397
Level 10

In the IT industry, you’ll hear “I’ll sell you a DevOps; how much is it worth?” But the joke’s on you, because you can’t sell (or buy) DevOps, as it is, in fact, an intangible entity. It’s a business process combining software development (Dev) and IT operations (Ops), with the aim of helping teams understand what goes into making and maintaining applications and business processes. All this happens while working as a team to improve the overall performance and stability of said apps and processes, rather than “chucking it over the fence” once your department’s piece of the puzzle is finished.

DevOps is often referred to as a journey, and you probably need to pass several milestones before you could consider your company a DevOps house. Several of the major milestones stem from the idea of adopting a blue/green method of deployment, in which you deploy a new version of your code (blue) running alongside the current version (green) and slowly move production traffic over to the new blue deployment while monitoring the application to see if improvements have been made. Once all the traffic is running on the blue version, you can stage the next change on the green environment. If the blue deployment is a detriment to the application, it’s backed out and all traffic reverts to the current green version.
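The blue/green traffic shift described above can be sketched in a few lines of code. This is a toy illustration, not any particular load balancer’s API; the class name, weights, and step size are all hypothetical:

```python
# Sketch of a weighted blue/green traffic split, assuming a router or load
# balancer that accepts per-version weights. All names are hypothetical.
import random

class BlueGreenRouter:
    def __init__(self):
        self.blue_weight = 0.0  # the new (blue) version starts with no traffic

    def shift(self, step=0.1):
        """Move another slice of production traffic to the blue deployment."""
        self.blue_weight = min(1.0, self.blue_weight + step)

    def rollback(self):
        """Revert all traffic to the current green deployment."""
        self.blue_weight = 0.0

    def route(self):
        """Pick a version for one incoming request, weighted by blue_weight."""
        return "blue" if random.random() < self.blue_weight else "green"

router = BlueGreenRouter()
for _ in range(5):          # gradually shift while monitoring app health
    router.shift(0.2)
print(router.blue_weight)   # 1.0 -> all traffic on the new version
```

If monitoring showed the blue version degrading mid-shift, you’d call `rollback()` instead of continuing, which is exactly the safety net the blue/green pattern buys you.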

A key part of the above blue/green deployment is a methodology of continuous integration and continuous deployment (CI/CD), whereby minor improvements are always being undertaken with the goal of optimizing the software and the hardware it runs on. To get to this point you need to make sure you have a system in place to continuously deploy to production, as well as a platform for continual testing. Your QA processes need to tackle everything from user integration to vulnerability testing and change management, and since you don’t want to have to be hunting around finding IP addresses or resource pools to run it on, automation is going to be key.

As you move towards CI/CD adoption rather than separate coding and testing phases, you begin to test as the code is being written. In turn, you’ll start to automate this testing and the eventual movement into production, which is referred to as a deployment pipeline. Finally, you’ll also need more detailed performance monitoring, hardware monitoring, software monitoring, and logging. With performance monitoring, it’s no longer good enough to look at network latency—you need to have a way to understand the performance process, including the IO to an application stack, the number of code commits and bugs identified, the vulnerabilities being handled, and the environment’s health status. With so many moving parts, you’ll also need something to ingest the logs and give you greater insight into and analysis of your environment.

But for all this to be undertaken, the first and possibly most major hurdle you’ll have to clear is the cultural shift within the organization. Willingness to cooperate truthfully and honestly as well as making failure less expensive is at the core of this shift. This cultural move must be led from the top down within the company. Making IT ops, software development, and security stop pointing the finger at each other and understand they all have a shared responsibility in the other departments’ undertaking can be a challenge, but if they’re properly incentivized and understand the overall goal, this shift can be a smoother process for an organization.

Building the correct foundation through the above milestones allows you to move from getting started into the five stages of DevOps evolution: Normalization, Standardization, Expansion, Automated Infrastructure Delivery, and Self-Service. Companies moving into the Normalization stage adhere to true agile methods, and the speed at which they invoke changes begins to increase; with time, they’re no longer hanging around like a loris, taking days or weeks to patch critical vulnerabilities, but move and adapt with the speed of a peregrine falcon.

In the recent Puppet 2019 State of DevOps Report, they raise the idea of improving your security stance by moving through the five stages of evolution so you can adapt quickly to vulnerabilities. For instance, only about 7% of those surveyed can respond to a vulnerability within an hour. The organizations with fully integrated security practices show the highest levels of DevOps evolution. This evolution, in turn, will let you soar through the clouds.

Read more
1 12 380
Level 12

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Brandon Shopp about improving systems management for Microsoft environments. He explores a range of considerations.

Microsoft offers some of its own monitoring options, such as System Center and Windows Admin Center. Federal IT pros can optimize performance by including additional monitoring strategies such as monitoring Windows servers, Microsoft applications, databases, Hyper-V, Azure, and Office 365.

Monitoring Strategies

Windows Servers

Identifying a performance issue involves understanding what’s not operating efficiently. In a Microsoft environment, this means knowing the operating system isn’t part of the problem.

To gain this knowledge, consider tools capable of focusing on the Windows servers to provide highly-specific information and help pinpoint—or rule out—a server-based issue.

Microsoft Applications

It can be impossible to truly understand application health—and, in turn, performance—without understanding how well Microsoft application services, processes, and components are operating.

To get this critical information, consider a tool that gives the federal IT team the ability to:

• Isolate page-load speeds based on location, application components, or underlying server infrastructure

• Monitor requests per second, throughput, and request wait time

• Identify the root cause of problems by monitoring key performance metrics, including request wait time and SQL query execution time

• Identify which webpage elements are slow and affect overall webpage application performance

A greater understanding of the performance levels of the processes feeding in to and out of applications can prove invaluable when trying to identify higher-level application performance issues.
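For a feel of how a couple of those metrics, requests per second and request wait time, might be derived from raw logs, here’s a small sketch; the log format and traffic numbers are hypothetical:

```python
# Rough sketch of deriving requests/sec and a wait-time percentile from a
# list of (timestamp, wait_ms) log entries. The log format is hypothetical.
def requests_per_second(entries, window_seconds):
    """Average request rate over the observation window."""
    return len(entries) / window_seconds

def percentile(values, pct):
    """Nearest-rank percentile; good enough for a monitoring sketch."""
    ordered = sorted(values)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]

# 300 synthetic requests over a 60-second window
entries = [(t, 40 + (t % 7) * 10) for t in range(300)]
waits = [w for _, w in entries]

print(requests_per_second(entries, 60))   # 5.0 requests/sec
print(percentile(waits, 95))              # p95 request wait time in ms
```

Real tools do this continuously and at scale, but the underlying idea is the same: reduce raw request logs to a handful of numbers you can alert and trend on.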

Databases

Every federal IT pro knows monitoring database performance is a must.

Specifically, be sure to invest in a tool with the ability to troubleshoot performance problems in real-time and historically. The historical perspective will allow the team to identify a baseline, so they can better understand the severity of a slowdown. This perspective will then allow the ability to analyze the database workload to identify inefficiencies. Ideally, the tool of choice will also provide SQL Server index recommendations as well as alerting and reporting capabilities.

Hyper-V

For optimized virtual infrastructure performance, be sure to optimize Microsoft Hyper-V—the company’s virtualization platform.

One of the best ways to do this is by understanding and optimizing the size of virtual machines through capacity planning. It’s also possible to take this even further by predicting the behavior of the virtual environment and solving potential issues before they escalate.

Not all tools will provide these capabilities, so choose wisely.

Azure

Many federal IT pros believe cloud monitoring is in the hands of the cloud provider. Not so. It’s possible—and highly recommended—to monitor the cloud infrastructure and transit to help ensure optimized system and application performance.

For example, a good tool will provide the ability to monitor Azure-based applications with as much visibility as on-premises applications. A better tool will go even further and allow the federal IT pro to measure the performance of each network node inside the cloud and to analyze historical performance data to pinpoint a timeframe if performance has degraded.

Microsoft offers a tool called Azure Monitor, which allows the federal IT pro to collect performance and utilization data, activity and diagnostics logs, and notifications from various Azure resources. Azure Monitor integrates with other analytics and monitoring tools, which is a plus for larger environments supporting a range of different types of products and services from a range of vendors.

For further peace of mind—and to help protect against data loss—look for the ability to back up emails to a secondary location.

Conclusion

Operating in a Microsoft-centric world doesn’t mean the federal IT pro must rely only on Microsoft products and services to optimize performance. Yes, Microsoft has excellent options, but other tools out there can go a long way toward ensuring a top-performing environment on site or in the Azure cloud.

Find the full article on our partner DLT’s blog Technically Speaking.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Read more
0 10 373
Level 17

Home this week and getting ready for Microsoft Ignite next week in Orlando. If you're at Ignite, please stop by the booth and say hello. I love talking data with anyone.

As always, here's a bunch of links I found interesting. Enjoy!

Microsoft beats Amazon to win the Pentagon’s $10 billion JEDI cloud contract

The most surprising part of this is an online bookstore thought they were the frontrunner. This deal underscores the difference between an enterprise software company with a cloud, and an enterprise infrastructure hosting company that also sells books.

Google claims it has achieved 'quantum supremacy' – but IBM disagrees

You mean Google would embellish upon facts to make themselves look better? Color me shocked.

Amazon migrates more than 100 consumer services from Oracle to AWS databases

"Amazon doesn't run on Oracle; why should you?"

“BriansClub” Hack Rescues 26M Stolen Cards

Counter-hacking is a thing. Expect to see more stories like this one in the coming years.

Berkeley City Council Unanimously Votes to Ban Face Recognition

Until the underlying technology improves, it's best for us to disallow the use of facial recognition for law enforcement purposes.

China’s social credit system isn’t about scoring citizens — it’s a massive API

Well, it's likely both, and a possible surveillance system. But if it keeps jerks away from me when I travel, I'm all for it.

Some Halloween candy is actually healthier than others

Keep this in mind when you're enforcing the Dad Tax on your kid's candy haul tomorrow night.

Every now and then my fire circle regresses to its former life as a pool.

water-circle.jpg

Read more
1 47 1,037
Level 12

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Mav Turner with suggestions on improving your agency’s FITARA score. FITARA rolls up scores from other requirements and serves to provide a holistic view of agency performance.

The most recent version of the scorecard measuring agency implementation of the Federal IT Acquisition Reform Act gave agencies cause for both celebration and concern. On the whole, scores in December’s FITARA Scorecard 7.0 rose, but some agencies keep earning low scores.

Agencies don’t always have the appropriate visibility into their networks to allow them to be transparent. All agencies should strive for better network visibility. Let’s look at how greater visibility can help improve an agency’s score and how DevOps and agile approaches can propel their modernization initiatives.

Software Licensing

Agencies with the lowest scores in this category failed to provide regularly updated software licensing inventories. This isn’t entirely surprising; after all, when licenses aren’t immediately visible, they tend to get forgotten or buried as a budget line item. Out of sight, out of mind.

However, the Making Electronic Government Accountable by Yielding Tangible Efficiencies Act (MEGABYTE Act) of 2016 is driving agencies to make some changes. MEGABYTE requires agencies to establish comprehensive inventories of their software licenses and use automated discovery tools to gain visibility into and track them. Agencies are also required to report on the savings they’ve achieved by optimizing their software licensing inventory.

Even if an agency doesn’t have an automated suite of solutions, it can still assess its inventory. This can be a great exercise for cleaning house and identifying “shelfware,” software purchased but no longer being used.
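A manual inventory assessment can be as simple as comparing purchased seats against discovered installs. The titles, seat counts, and 25% usage threshold below are all made up for illustration:

```python
# Toy license-inventory pass to surface "shelfware." All data is hypothetical.
purchased = {"VisioPro": 200, "ProjectServer": 150, "CADSuite": 80}
in_use    = {"VisioPro": 185, "ProjectServer": 12,  "CADSuite": 0}

def find_shelfware(purchased, in_use, usage_threshold=0.25):
    """Return titles where usage fell below the given fraction of seats."""
    flagged = {}
    for title, seats in purchased.items():
        used = in_use.get(title, 0)
        if seats and used / seats < usage_threshold:
            flagged[title] = seats - used  # seats that could be reclaimed
    return flagged

print(find_shelfware(purchased, in_use))
# ProjectServer and CADSuite fall below 25% usage
```

Reporting the reclaimed-seat counts directly supports the MEGABYTE Act requirement to show savings from optimizing the license inventory.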

Risk Management

Risk management is directly tied to inventory management. IT professionals must know what applications and technologies comprise their infrastructures. Obtaining a complete understanding of everything within those complex networks can be daunting, but there are solutions to help.

Network and inventory monitoring technologies can give IT professionals insight into the different components affecting their networks, from mobile devices to servers and applications. They can use these technologies to monitor for potential intrusions and threats, but also to look for irregular traffic patterns and bandwidth issues.

Data Center Optimization

Better visibility can also help IT managers identify legacy applications to modernize. Knowing which applications are being used is critical to being able to determine which ones should be removed and where to focus modernization efforts.

Unfortunately, agencies often discover they still need legacy solutions to complete certain tasks. They get stuck in a vicious circle where they continue to add to, not reduce, their data centers. Their FITARA scores end up reflecting this struggle.

Applying a DevOps approach to modernization can help agencies achieve their goals. DevOps is often based on agile development practices enabling incremental improvements in short amounts of time; teams see what they can realistically get done in three to five weeks. They prioritize the most important projects and strive for short-term wins. This incremental progress can build momentum toward longer-term goals, including getting all legacy applications offline and reducing costly overhead.

While visibility and transparency are essential for improvements across all these categories, FITARA scorecards themselves are also useful for shining light on the macro problems agencies face today. They can help illuminate areas of improvement, so IT professionals can prioritize their efforts and make a significant difference to their organizations. Every government IT manager should stay up-to-date on the scoring methodologies and how other agencies are doing.

Find the full article on Government Computer News.

  The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Read more
0 10 488
Level 17

If you’ve read my posts for any length of time, you know I sometimes get caught up in side projects. Whether it’s writing an eBook, creating a series of blog posts about custom SolarWinds reports, or figuring out how to make JSON requests in Perl, when my ADD and inspiration team up to conspire against me, I have no choice but to follow. The good news is I usually learn something interesting along the way.

That’s what this series of posts is going to be about—yet another trip down the technical rabbit hole of my distractibility. Specifically, I implemented Pi-Hole on a spare Raspberry Pi at home, and then decided it needed to be monitored.

In the first part of the series (today’s post), I’m going to give some background on what Pi-Hole and the Raspberry Pi are and how they work. In the next installment, I’ll cover how to monitor it using SolarWinds Server & Application Monitor (SAM).

If you’re impatient, you can download all three of the templates I created from the THWACK content exchange. The direct links are here:

Please note these are provided as-is, for educational purposes only. Do not hold the author, the author’s company, or the author’s dog responsible for any hair loss, poor coffee quality, or lingering childhood trauma.

What Is a Raspberry Pi?

For those who haven’t had exposure to these amazing little devices, a Raspberry Pi is a small, almost credit-card-sized (3.5” x 2.25”) full computer on a single board. It has a CPU, onboard memory, GPU, and support hardware for a keyboard, mouse, monitor, and network connection. While most people use the operating system “Raspbian” (a Debian-based Linux variant), it also supports several other OS options built off variants of Linux, RISC OS, and even Microsoft Windows.

What Is Pi-Hole?

Pi-Hole software makes your home (or work, if your IT group is open-minded enough) network faster and safer by blocking requests to malicious, unsavory, or just plain obnoxious sites. If you’re using Pi-Hole, it’ll be most noticeable when advertisements on a webpage fail to load, like this:

BEFORE: pop-overs and hyperbolic ads.

AFTER: No pop-overs, spam ads blocked

But under the hood, it’s even more significant:

BEFORE: 45 seconds to load

AFTER: 6 seconds to load

Look in the lower-right corner of each of those images. Load time without Pi-Hole was over 45 seconds. With it, the load time was 6 seconds. You may not think there are many of these requests, but your computer is making calls out to these sites all the time. Here are the statistics from my house on a typical day.

The Pi-Hole software was originally built for the Raspberry Pi but has since been extended to run on full computers (or VMs) running Ubuntu, CentOS, Debian, or Fedora, or in Docker containers hosted on those systems. That said, I’m focusing on the original, Raspberry Pi-based version for this post.

What Is This API?

If you’ve already dug into APIs as part of your work, you can probably skip this section. Otherwise, read on!

An Application Programming Interface (API) is a way of getting information out of (or sometimes into) a program without using the normal interface. In the case of Pi-Hole, I could go to the web-based admin page and look at statistics on each screen, but since I want to pull those statistics into my SolarWinds monitoring system, I’m going to need something a bit more straightforward. I want to be able to effectively say directly to Pi-Hole, “How many DNS queries have you blocked so far today?” and have Pi-Hole send back “13,537” without all the other GUI frou-frou.

SHAMELESS PROMOTION: If you find the idea of APIs exciting and intriguing, then I should point you toward the SolarWinds Orion Software Developer Kit (SDK), a full API supporting the language of your choice (yes, even Perl; trust me, I tried it). There’s a whole forum on THWACK dedicated to it. Head over there if you want to find out how to add nodes, assign IP addresses, acknowledge alerts, and perform other forms of monitoring wizardry.

How Does the Pi-Hole API Work?

If you have Pi-Hole running, you get to the API by going to http://<your pi-hole url>/admin/api.php. There are two modes for extracting data: summary and authorized. Summary mode is what you get when you hit the URL above. It will look something like this:

{"domains_being_blocked":115897,"dns_queries_today":284514,"ads_blocked_today":17865,"ads_percentage_today":6.279129,"unique_domains":14761,"queries_forwarded":216109,"queries_cached":50540,"clients_ever_seen":38,"unique_clients":22,"dns_queries_all_types":284514,"reply_NODATA":20262,"reply_NXDOMAIN":19114,"reply_CNAME":16364,"reply_IP":87029,"privacy_level":0,"status":"enabled","gravity_last_updated":{"file_exists":true,"absolute":1567323672,"relative":{"days":"3","hours":"09","minutes":"53"}}}
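If you want to sanity-check the payload before wiring it into a monitoring template, the summary JSON is easy to pick apart in a few lines of Python. This is just an illustrative sketch using a trimmed copy of the response above; in practice you’d fetch the live URL from your own Pi-Hole.

```python
import json

# A trimmed copy of the summary payload shown above
raw = ('{"domains_being_blocked":115897,"dns_queries_today":284514,'
       '"ads_blocked_today":17865,"ads_percentage_today":6.279129,'
       '"status":"enabled"}')

stats = json.loads(raw)

# Pull out the headline numbers a monitoring template would care about
print(f"Status        : {stats['status']}")
print(f"Queries today : {stats['dns_queries_today']}")
print(f"Ads blocked   : {stats['ads_blocked_today']} "
      f"({stats['ads_percentage_today']:.1f}%)")
```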

If you look at it with a browser capable of formatting JSON data, it looks a little prettier:

Meanwhile, the authorized version is specific to certain data elements and requires a token you get from the Pi-Hole itself. You view the stats by adding ?<the value you want> along with &auth=<your token> to the end of the URL. To get the topItems data, it would look something like this: http://192.168.101.10/admin/api.php?topItems&auth=0123456789abcdefg012345679

And the result would be:

You get a token by going to the Pi-Hole dashboard, choosing Settings, clicking the “API/Web Interface” tab, and clicking the “Show Token” button. The values requiring a token are described on the Discourse page for the Pi-Hole API.
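If you plan to script against the authorized endpoints, a tiny helper to assemble the URL saves some typing. A minimal sketch; the address and token below are the same placeholder values used above, so substitute your own:

```python
def pihole_api_url(base: str, endpoint: str, token: str) -> str:
    """Build an authorized Pi-Hole API URL for the given endpoint.

    The endpoint name (e.g., "topItems") appears as a bare query
    parameter, and the token rides along in the "auth" parameter.
    """
    return f"{base}?{endpoint}&auth={token}"

# Placeholder address and token -- substitute your own
url = pihole_api_url("http://192.168.101.10/admin/api.php",
                     "topItems",
                     "0123456789abcdefg012345679")
print(url)
# -> http://192.168.101.10/admin/api.php?topItems&auth=0123456789abcdefg012345679
```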

Until Next Time

That’s it for now. In my next post of the series, I’ll dig deep into building the SAM template. Your homework is to repurpose, dust off, or buy a Raspberry Pi, load it up with Pi-Hole, and get it configured. Then you’ll be ready to try out the next steps when I come back. And if you want to have those templates ready to go, you can download them here:

Read more
7 23 1,447
Level 17

I'm in Austin this week for THWACKcamp. I hope you're watching the event and reading this post later in the day. We tried a new format this year, and I hope you enjoy what we built.

As always, here are some links I found interesting this week. Enjoy!

GitHub renews controversial $200,000 contract with ICE

“At GitHub, we believe in empowering developers around the world. We also believe in basic human rights, treating people with respect and dignity, and cold, hard, cash.”

NASA has a new airplane. It runs on clean electricity

I hope this technology doesn't take 30 years to come to market.

Revealed: the 20 firms behind a third of all carbon emissions

Maybe we need to work on electric projects for these companies instead.

WeWork expected to cut 500 tech roles

It seems every week there's another company collapsing under the weight of its absurd business model.

Visa, MasterCard, Stripe, and eBay all quit Facebook’s Libra in one day

I don't understand why they were involved to begin with.

Linus Torvalds isn't concerned about Microsoft hijacking Linux

Microsoft is absolutely a different company. It's good to see Linus acknowledge this.

Elizabeth Warren trolls Facebook with 'false' Zuckerberg ad

Here's a thought: maybe don't allow any political ads on Facebook. That way, we don't have to worry about what's real or fake. Of course, that can't happen, because Facebook wants money.

The leaves have turned, adding some extra color to the fire circle.

fall.jpg

Read more
1 32 709
Level 10

During this series, we’ve looked at virtualization’s past, the challenges it has created, and how its evolution will allow it to continue to play a part in infrastructure delivery in the future. In this final part, I’d like to draw these strands together and share how I believe the concept of virtualization will continue to be a core part of infrastructure delivery, even if it’s a little different from what we’re used to.

Will Cloud Be the End of Virtualization?

When part one of this series was published, one question caught my attention: “Will public cloud kill virtualization?”

I hadn’t considered this for this series, but it intrigued me nonetheless.

It caught my attention not because I believe cloud will be its end, but because it’s played a significant part in redefining the way we think about infrastructure delivery, and consequently, how we need to think about virtualization.

Redefining Infrastructure

This redefinition is not just a technical one; it’s also a fundamental shift in focus. When we discuss infrastructure in a cloud environment, we don’t think about vendor preferences, hardware specifications, or individual component configurations. We focus on the service, from virtual machine to AI and everything in between. Our focus is the outcome—what the service delivers—not the technicalities of how we deliver it.

I believe this change in expectation drives the future evolution of virtualization.

Virtualizing All the Things

This future is based on virtualizing increasing elements of our infrastructure stack, not just servers, but networking and storage. It's about abstracting the capabilities of each of these elements from specific custom hardware and allowing them to be deployed in any compatible environment.

More than this, making more of our infrastructure software-based allows us to more easily automate deployment, deliver our infrastructure as code, and provide the flexibility and portability a modern enterprise demands.

Abstracting Even Further

However, this isn’t where the evolution of virtualization stops. We’re already seeing the development of its next phase.

New models like containerization and serverless functions abstract away not only the reliance on hardware but also on operating systems. They’re designed to be ephemeral: created or called on demand, delivering their outcome, then disappearing, to be recreated whenever and wherever they’re needed rather than lingering in our infrastructure forever as an endless sprawl of virtual resources.

Virtualization Next

Virtualizing infrastructure has, over the last 20 years, transformed the way we deliver our IT systems and has allowed us to deliver models focused on outcomes, provide flexibility, and quickly meet our needs.

At the start of this series, we asked whether virtualization has a future. As we begin to rethink not only how we deliver infrastructure but also how we architect it, does virtualization still have a place?

The new architectures we’re building are inspired by the large cloud providers: built at scale, deployed at speed, placed anywhere we want, without much consideration of the underlying infrastructure and, where appropriate, existing only as long as needed.

Virtualization remains at the very core of these new infrastructures, whether as software-defined, containers, or serverless. These are all evolutions of the virtualization concept and while it continues to evolve, it will remain relevant for some time to come.

I hope you’ve enjoyed this series. Thank you for your comments and for getting involved. Hopefully it’s given you some new ideas about how you can use virtualization in your infrastructure, now and in the future.

Read more
1 8 532
Level 17

Can you believe THWACKcamp is only a week away?! Behind the scenes, we start working on THWACKcamp in March, maybe even earlier. I really hope you like what we have in store for you this year!

As always, here are some links I found interesting this week. Enjoy!

Florida man arrested for cutting the brakes on over 100 electric scooters

As if these scooters weren't already a nuisance, now we have to worry that they could have been tampered with before you use one. It's time we push back on these things until the service providers can demonstrate a reasonable amount of safety.

Groundbreaking blood test could detect over 20 types of cancer

At first I thought this was an old post for Theranos, but it seems recent, and from an accredited hospital. As nice as it would be to have better screening, it would be nicer to have better treatments.

SQL queries don't start with SELECT

Because I know some of y'all write SQL every now and then, and I want you to have a refresher on how the engine interprets your SELECT statement to return physical data from disk.

Facebook exempts political ads from ban on making false claims

This is fine. What's the worst that could happen?

Data breaches now cost companies an average of $1.41 million

But only half that much for companies with good security practices in place.

Decades-Old Code Is Putting Millions of Critical Devices at Risk

Everything is awful.

How Two Kentucky Farmers Became Kings Of Croquet, The Sport That Never Wanted Them

A bit long, but worth the time. I hope you enjoy the story as much as I did.

Even as the weather turns cold, we continue to make time outside in the fire circle.

fire-circle.JPG

Read more
0 33 626
Level 10

vSphere, which many consider VMware’s flagship product, is virtualization software comprising the vCenter management platform and the ESXi hypervisor. vSphere is available in three different licenses: vSphere Standard, vSphere Enterprise Plus, and vSphere Platinum. Each comes with a different cost and set of features. The current version of vSphere is 6.7, which includes some of the following components.

Have a spare physical server lying around that can be repurposed? Voila, you now have a home for the ESXi Type 1 hypervisor. This type of hypervisor runs directly on the physical server and doesn’t need a host operating system. It’s a perfect use case if you have an older physical server lying around that meets the minimum requirements. The disadvantages of this setup include higher costs, the need for rack space, higher power consumption, and lack of mobility.

What if you don’t have a physical server at your disposal? Your alternative is to run ESXi nested inside a Type 2 hypervisor, which runs on top of an existing operating system rather than directly on hardware. A great example is my test lab, which consists of a laptop meeting the minimum requirements. The laptop runs Windows 10 Pro as its host operating system, and my lab runs in a virtual image via VMware Workstation. The advantages of this setup include minimal costs, lower power consumption, and mobility.

To provide some perspective, the laptop specifications are listed below:

  • Lenovo ThinkPad with Windows 10 Pro as the host operating system
  • Three hard drives: one 140GB drive as the primary partition and two 465GB drives acting as my datastores (DS1 and DS2, respectively); 32GB RAM
  • One VMware ESXi Host (v6.7, build number 13006603)
  • Four virtual machines (thin provisioned)
    • Linux Mint 19.1 Cinnamon (10GB hard drive, 2GB RAM, one vCPU)
    • Windows 10 Pro 1903 (50GB hard drive, 8GB RAM, two vCPUs)
    • Windows Server 2012 R2 (60GB hard drive, 8GB RAM, two vCPUs)
    • Pi-Hole (20GB hard drive, 1GB RAM, one vCPU)

vSphere 6.7 introduced significant improvements over its predecessor, vSphere 6.5. Some of these improvements and innovations include:

  • Simple and efficient management at scale
  • Two times faster than v6.5
  • Three times less memory consumption
  • New APIs improve deployment and management of the vCenter Appliance
  • Single reboot and vSphere Quick Boot reduce upgrade and patching times
  • Comprehensive built-in security for the hypervisor and guest OS also secures data across the hybrid cloud
  • Integrates with vSAN, NSX, and vRealize Suite
  • Supports mission-critical applications, big data, artificial intelligence, and machine learning
  • Any workloads can be run, including hybrid, public, and private clouds
  • Seamless hybrid cloud experience with a single pane of glass to manage multiple vSphere environments on different versions, spanning an on-premises data center and a public cloud running vCenter, like VMware Cloud on AWS

If you’re interested in learning more about vSphere, VMware provides an array of options to choose from, including training, certifications, and hands-on labs.

Read more
0 14 438