Geek Speak Blogs

Here’s an interesting article by my colleague Mav Turner, who offers suggestions for scaling monitoring to support large complex networks.

Here’s an interesting article by my colleague Mav Turner about the increasing use of automation, and the benefits and changing skills needed to be successful.

This edition of the Actuator comes to you from my kitchen, where I'm enjoying some time at home before I hit the road. I'll be at RSA next week, then Darmstadt, Germany the following week. And then I head to Seattle for the Microsoft MVP Summit. This is all my way of saying future editions of the Actuator may be delayed. I'll do my best, but I hope you understand.

As always, here's a bunch of links I hope you will find useful. Enjoy!

It doesn’t matter if China hacked Equifax

No, it doesn't, because the evidence suggests China was but one of many entities that helped themselves to the data Equifax was negligent in guarding.

Data centers generate the same amount of carbon emissions as global airlines

Machine learning and bitcoin mining are large consumers of power in any data center. This is why Microsoft has announced they'll look to be carbon neutral as soon as possible.

Delta hopes to be the first carbon neutral airline

On the heels of Microsoft's announcement, seeing this from Delta gives me hope that many other companies will take action, and not just issue press releases.

Apple’s Mac computers now outpace Windows in malware and virus

Nothing is secure. Stay safe out there.

Over 500 Chrome Extensions Secretly Uploaded Private Data

Everything is terrible.

Judge temporarily halts work on JEDI contract until court can hear AWS protest

This is going to get ugly to watch. You stay right there, I'll go grab the popcorn.

How to Add “Move to” or “Copy to” to Windows 10’s Context Menu

I didn't know I needed this until now, and now I'm left wondering how I've lived so long without this in my life.

Our new Sunday morning ritual is walking through Forest Park. Each week we seem to find something new to enjoy.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by Brandon Shopp about DoD’s not-so-secret weapon against cyberthreats. DISA has created technical guidelines that evolve to help keep ahead of threats, and this blog helps demystify DISA STIGs.

The Defense Information Systems Agency (DISA) has a set of security regulations to provide a baseline standard for Department of Defense (DoD) networks, systems, and applications. DISA enforces hundreds of pages of detailed rules IT pros must follow to properly secure or “harden” the government computer infrastructure and systems.

If you’re responsible for a DoD network, these STIGs (Security Technical Implementation Guides) help guide your network management, configuration, and monitoring strategies across access control, operating systems, applications, network devices, and even physical security. DISA releases new STIGs at least once every quarter. This aggressive release schedule is designed to catch as many recently patched vulnerabilities as possible and ensure a secure baseline for the component in operation.

How can a federal IT pro get compliant when so many requirements must be met on a regular basis? The answer is automation.

First, let’s revisit STIG basics. The DoD developed STIGs, or hardening guidelines, for the most common components comprising agency systems. As of this writing, there are nearly 600 STIGs, each of which may comprise hundreds of security checks specific to the component being hardened.

A second challenge, in addition to the cost of meeting STIG requirements, is the number of requirements needing to be met. Agency systems may be made up of many components, each requiring STIG compliance. Remember, there are nearly 600 different versions of STIGs, some unique to a component, some targeting specific release versions of the component.

Wouldn’t it be great if automation could step in and solve the cost challenge while saving time by building repeatable processes? That’s precisely what automation does.

  • Automated tools for Windows servers let you test STIG compliance on a single instance, test all changes until approved, then push out those changes to other Windows servers via Group Policy Object (GPO) automation. Automated tools for Linux permit a similar outcome: test all changes due to STIG compliance and then push all approved changes as a tested, secure baseline out to other servers.
  • Automated network monitoring tools digest system logs in real time, create alerts based on predefined rules, and help meet STIG requirements for Continuous Monitoring (CM) security controls while providing the defense team with actionable response guidance.
  • Automated device configuration tools can continuously monitor device configurations for setting changes across geographically dispersed networks, enforcing compliance with security policies, and making configuration backups useful in system restoration efforts after an outage.
  • Automation also addresses readability. STIGs are released in XML format—not the most human-readable form for delivering data. Some newer automated STIG compliance tools generate easy-to-read compliance reports useful for both security management and technical support teams (see the sketch below for the general idea).
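
To make that last point concrete, here is a minimal sketch (Python, assuming an XCCDF 1.1-style STIG benchmark file such as those published in DISA's STIG library; the file name and namespace are illustrative assumptions) that turns the raw XML into a quick, readable severity summary:

```python
# Minimal sketch: summarize a DISA STIG (XCCDF XML) into a readable report.
# Assumes an XCCDF 1.1-style benchmark; the file name below is illustrative.
import xml.etree.ElementTree as ET
from collections import Counter

NS = {"x": "http://checklists.nist.gov/xccdf/1.1"}

def summarize_stig(path):
    root = ET.parse(path).getroot()
    severities = Counter()
    rows = []
    # Each Rule element is a single security check with a severity attribute.
    for rule in root.iter(f"{{{NS['x']}}}Rule"):
        title = rule.findtext("x:title", default="(untitled)", namespaces=NS)
        severity = rule.get("severity", "unknown")
        severities[severity] += 1
        rows.append((severity, title))
    return severities, rows

if __name__ == "__main__":
    counts, rules = summarize_stig("U_Windows_Server_2019_STIG.xml")
    print("Checks by severity:", dict(counts))
    for severity, title in rules[:10]:
        print(f"[{severity:>8}] {title}")
```

A one-page summary like this is no substitute for a dedicated compliance tool, but it shows how little work it takes to get STIG XML into a form a security manager can skim.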

If you’re a federal IT pro within a DoD agency, you have an increasing number of requirements to satisfy. Let automation take some of the heavy lifting when it comes to compliance, so you and your team can focus on more pressing tasks.

Find the full article on Government Technology Insider.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Some folks find a profession early in their life and that’s what they do until they’re old and gray and ready to retire. Other folks find themselves switching careers mid-way. Me? I’m on the third career in my working life. I know a thing or two about learning something new. Why do I bring this up? Because if you’re in the IT industry, chances are you’ll spend a big chunk of your professional life learning something new. There’s also a good chance you’ll have to sit for an exam or two to prove you’ve learned something new. And for some, that prospect overwhelms. If that’s you, keep reading! In this two-part series, I’m going to share my thoughts, tips, and tools for picking up a new skill.

Be the Captain of Your Own Ship

Sometimes you get told you need to have XYZ certification to qualify for the next pay raise. Sometimes the idea comes from your own self-motivation. Either way, the first step to successful completion is for you to make the commitment to the journey. Even if the idea isn’t your own, your personal commitment to the journey will be critical to its success. We’ve all seen what happens when there isn’t personal commitment. Someone gets assigned something they have no interest in or desire for. Whatever the task, it usually gets done half-heartedly and the results are terrible. You don’t want to be terrible. You want that cert. Make the commitment. It doesn’t have to be flashy or public, but it does have to be authentic to you.

Make a New Plan, Stan...

Once you’ve made the decision to go after your goal, it’s time to make your plan. After all, no captain sets sail without first plotting a course. For certification-chasers, there is usually a blueprint out there with what the certification exam will cover. That’s a good place to start.

Charting your course should include things like:

  • A concrete, measurable goal.

  • A realistic timeline.

  • The steps to get from today to success[i].

Think about what hazards might impede your progress. After all, you don’t want to plot your course right through a reef. Things like:

  • How much time can you realistically devote to studying?

  • What stressors might affect your ability to stay on track?

  • Will your own starting point, knowledge-wise, make the journey longer or shorter?

Make Like a Penguin

If you’re a Madagascar[ii] fan, you know the penguin credo is “Never swim alone.” It’s great advice for penguins and for IT knowledge-seekers. Making your journey alone is like filling your bag with rocks before you start. It just makes life harder.

There are a ton of great online and real-life IT communities out there. Find one that works for you and get engaged. If your journey is at all like mine, at first you might just be asking questions. I know I asked a zillion questions in the beginning. These days I end up answering more questions than I ask, but I find answering others’ questions helps me solidify my own knowledge. Another option is a formal study group, which helps by providing structure, feedback on your progress, and motivation.

Lastly, don’t forget about your friends and family. They might not know the subject matter, but they can make your road smoother by freeing up your time or giving you valuable moral support. Make sure they swim, too.

This article has been the high-level design for a successful certification journey. Stay tuned for the next installment, where we’ll go low-level with some tips for getting to success one day at a time. Until then, remember to keep doing it... just for fun!


[i] https://www.forbes.com/sites/biancamillercole/2019/02/07/how-to-create-and-reach-your-goals-in-4-ste...

[ii] https://www.imdb.com/title/tt0484439/characters/nm0569891

This week's Actuator comes to you from Austin, as I'm in town to host SolarWinds Lab live. We'll be talking about Database Performance Monitor (nee VividCortex). I hope you find time to watch and bring questions!

As always, here's a bunch of links I hope you find useful. Enjoy!

First clinical trial of gene editing to help target cancer

Being close to the biotech industry in and around Boston, I heard rumors of these treatments two years ago. I'm hopeful our doctors can get this done, and soon.

What Happened With DNC Tech

Twitter thread about the tech failure in Iowa last week.

Analysis of compensation, level, and experience details of 19K tech workers

Wonderful data analysis on salary information. Start at the bottom with the conclusions, then decide for yourself if you want to dive into the details above.

Things I Believe About Software Engineering

There are some deep thoughts in this brief post. Take time to reflect on them.

Smart Streetlights Are Experiencing Mission Creep

Nice reminder that surveillance is happening all around us, in ways you may never know.

11 Reasons Not to Become Famous (or “A Few Lessons Learned Since 2007”)

A bit long, but worth the time. I've never been a fan of Tim or his book, but this post struck a chord.

Berlin artist uses 99 phones to trick Google into traffic jam alert

Is it wrong that I want to try this now?

I think I understand why they never tell me anything around here...

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Mav Turner with ideas about how the government could monitor and automate their hyperconverged infrastructure to help achieve their modernization objectives.

It’s no surprise hyperconverged infrastructure (HCI) has been embraced by a growing number of government IT managers, since HCI merges storage, compute, and networking into a much smaller and more manageable footprint.

As with any new technology, however, HCI’s wide adoption can be slowed by skeptics, such as IT administrators concerned with interoperability and implementation costs. Yet HCI doesn’t mean starting from scratch. Indeed, migration is best achieved gradually, with agencies buying only what they need, when they need it, as part of a long-term IT modernization plan.

Let’s take a closer look at the benefits HCI provides to government agencies, then examine key considerations when it comes to implementing the technology—namely, the importance of automation and infrastructure monitoring.

Combining the Best of Physical and Virtual Worlds

Enthusiasts like how HCI gives them the performance, reliability, and availability of an on-premises data center along with the ability to scale IT in the cloud. This flexibility allows them to easily incorporate new technologies and architectures into the infrastructure. HCI also consolidates previously disparate compute, networking, and storage functions into a single, compact data center.

Extracting Value Through Monitoring and Automation

Agencies are familiar with monitoring storage, network, and compute as separate entities; when these functions are combined with HCI, network monitoring is still required. Indeed, having complete IT visibility becomes more important as infrastructure converges.

Combining different services into one is a highly complex task fraught with risk. Things change rapidly, and errors can easily occur. Managers need clear insight into what’s going on with their systems.

After the initial deployment is complete, monitoring should continue unabated. It’s vital for IT managers to understand the impact apps, services, and integrated components have on each other and the legacy infrastructure around them.

Additionally, all these processes should be fully automated. Autonomous workload acceleration is a core HCI benefit. Automation binds HCI components together, making them easier to manage and maintain—which in turn yields a more efficient data center. If agencies don’t spend time automating the monitoring of their HCI, they’ll run the risk of allocating resources or building out capacity they don’t need and may expose organizational data to additional security threats.

Investing in the Right Technical Skills

HCI requires a unique skillset. It’s important for agencies to invest in technical staff with practical HCI experience and the knowledge to effectively implement infrastructure monitoring and automation capabilities. These experts will be critical in helping agencies take advantage of the vast potential this technology has to offer.

Reaping the Rewards of HCI

Incorporating infrastructure monitoring and automation into HCI implementation plans will enable agencies to reap the full rewards: lower total cost of IT ownership thanks to simplified data center architecture, consistent and predictable performance, faster application delivery, improved agility, IT service levels accurately matched to capacity requirements, and more.

There’s a lot of return for simply applying the same level of care and attention to monitoring HCI as you would to traditional infrastructure.

Find the full article on Government Computer News.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

This week's Actuator comes to you from New England where it has been 367 days since our team last appeared in a Super Bowl. I'm still not ready to talk about it, though.

As always, here's a bunch of links I hope you find interesting. Enjoy!

97% of airports showing signs of weak cybersecurity

I would have put the number closer to 99%.

Skimming heist that hit convenience chain may have compromised 30 million cards

Looks like airports aren't the only industry with security issues.

It’s 2020 and we still have a data privacy problem

SPOILER ALERT: We will always have a data privacy problem.

Don’t be fooled: Blockchains are not miracle security solutions

No, you don't need a blockchain.

Google’s tenth messaging service will “unify” Gmail, Drive, Hangouts Chat

Tenth time is the charm, right? I'm certain this one will be the killer messaging app they have been looking for. And there's no way once it gets popular they'll kill it, either.

A Vermont bill would bring emoji license plates to the US

Just like candy corn, here's something else no one wants.

For the game this year I made some pork belly bites in a garlic honey soy sauce.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Jim Hansen reviewing data from our cybersecurity survey, including details on how agencies are combatting threats.

According to a 2019 Federal Cybersecurity Survey released last year by IT management software company SolarWinds, careless and malicious insiders topped the list of security threats for federal agencies. Yet, despite the increased threats, federal IT security pros believe they’re making progress managing risk.

Why the positive attitude despite the increasing challenge? While threats may be on the rise, strategies to combat these threats—such as government mandates, security tools, and best practices—are seeing vast improvements.

Greater Threat, Greater Solutions

According to the Cybersecurity Survey, 56% of respondents said the greatest source of security threats to federal agencies is careless and/or untrained agency insiders; 36% cited malicious insiders as the greatest source of security threats.

Most respondents cited numerous reasons why these types of threats have lessened or remained under control, from policy and process improvements to better cyberhygiene and advancing security tools.

• Policy and process improvements: 58% of respondents cited “improved strategy and processes to apply security best practices” as the primary reason careless insider threats have improved.

• Basic security hygiene: 47% of respondents cited “end-user security awareness training” as the primary reason careless insider threats have improved.

• Advanced security tools: 42% of respondents cited “intrusion detection and prevention tools” as the primary reason careless insider threats have improved.

“NIST Framework for Improving Critical Infrastructure Cybersecurity” topped the list of the most critical regulations and mandates, cited by 60% of respondents as a primary contributing factor in managing agency risks, with FISMA (Federal Information Security Management Act) close behind at 55% and DISA STIGs (Security Technical Implementation Guides) at 52%.

There’s also no question the tools and technologies to help reduce risk are advancing quickly; this was evidenced by the number of tools federal IT security pros rely on to ensure a stronger security posture within their agencies. The following are the tools cited, and the percentage of respondents saying these are their most important technologies in their proverbial tool chest:

• Intrusion detection and prevention tools: 42%

• Endpoint and mobile security: 34%

• Web application firewalls: 34%

• File and disk encryption: 34%

• Network traffic encryption: 34%

• Web security or web content filtering gateways: 33%

• Internal threat detection/intelligence: 30%

Training was deemed the most important factor in reducing agency risk, particularly when it comes to reducing risks associated with contractors or temporary workers:

• 53% cited “ongoing security training” as the most important factor

• 49% cited “training on security policies when onboarding” as the most important factor

• 44% cited “educate regular employees on the need to protect sensitive data” as the most important factor

Conclusion

Any federal IT security pro will tell you that although things are improving, there’s no one answer or single solution. The most effective way to reduce risk is a combination of tactics, from implementing ever-improving technologies to meeting federal mandates to ensuring all staffers are trained in security best practices.

Find the full article on our partner DLT’s blog Technically Speaking.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

This week's Actuator comes to you from the suddenly mild January here in the Northeast. I'm taking advantage of the warm and dry days up here, spending time walking outdoors. Being outdoors is far better than the treadmill at the gym.

As always, here's a bunch of links from the internet I hope you will find useful. Enjoy!

Jeff Bezos hack: Amazon boss's phone 'hacked by Saudi crown prince'

I don't know where to begin. Maybe we can start with the idea that Bezos uses WhatsApp, an app known to be unsecured and owned by the unsecured Facebook. I'm starting to think he built a trillion-dollar company by accident, not because he's smart.

New Ransomware Process Leverages Native Windows Features

This is notable, but not new. Ransomware often uses resources available on the machine to do damage. For example, VB macros embedded in spreadsheets. I don't blame Microsoft for saying they won't provide security service for this, but it would be nice if they could hint at finding ways to identify and halt malicious activity.

London facial recognition: Metropolitan police announces new deployment of cameras

Last week the EU was talking about a five-year ban on facial recognition technology. Naturally, the U.K. decides to double down on their use of that same tech. I can't help but draw the conclusion this shows the deep divide between the U.K. and the EU.

Security Is an Availability Problem

I'm not certain, but I suspect many business decision-makers tend to think "that can't happen to us," and thus fail to plan for the day when it does happen to them.

Apple's dedication to 'a diversity of dongles' is polluting the planet

Words will never express my frustration with Apple for the "innovation" of removing a headphone jack and forcing me to buy additional hardware to continue to use my existing accessories.

Webex flaw allowed anyone to join private online meetings - no password required

The last thing I'm doing during the day is trying to join *more* meetings.

Play Dungeons & Deadlines

You might want to set aside some time for this one.

Walking through Forest Park this past Sunday, after a rainstorm the day before, the temperature was perfect for catching the steam coming off the trees.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by Jim Hansen about using patching, credential management, and continuous monitoring to improve security of IoT devices.

Security concerns over the Internet of Things (IoT) are growing, and federal and state lawmakers are taking action. First, the U.S. Senate introduced the Internet of Things Cybersecurity Improvement Act of 2017, which sought to “establish minimum security requirements for federal procurements of connected devices.” More recently, legislators in the state of California introduced Senate Bill No. 327, which stipulated manufacturers of IoT devices include “a reasonable security feature” within their products.

While these laws are good starting points, they don’t go far enough in addressing IoT security concerns.

IoT Devices: A Hacker’s Best Friend?

IoT devices all have the potential to connect to the internet and local networks and, for the most part, were designed for convenience and speed—not security. And since they’re connected to the network, they offer a backdoor through which other systems can be easily compromised.

As such, IoT devices offer tantalizing targets for hackers. A single exploit from one connected device can lead to a larger, more damaging breach. Remember the Target hack from a few years ago? Malicious attackers gained a foothold into the retail giant’s infrastructure by stealing credentials from a heating and air conditioning company whose units were connected to Target’s network. It’s easy to imagine something as insidious—and even more damaging to national security—taking place within the Department of Defense or other agencies that have been early adopters of connected devices.

Steps for Securing IoT Devices

When security managers initiate IoT security measures, they’re not only protecting their devices, they’re safeguarding everything connected to those devices. Therefore, it’s important to go beyond the government’s baseline security recommendations and embrace more robust measures. Here are some proactive steps government IT managers can take to lock down their devices and networks.

  • Make patching and updating a part of the daily routine. IoT devices should be subject to a regular cadence of patches and updates to help ensure the protection of those devices against new and evolving vulnerabilities. This is essential to the long-term security of connected devices.

The Internet of Things Cybersecurity Improvement Act of 2017 specifically requires vendors to make their IoT devices patchable, but it’s easy for managers to go out and download what appears to be a legitimate update—only to find it’s full of malware. It’s important to be vigilant and verify security packages before applying them to their devices. After updates are applied, managers should take precautions to ensure those updates are genuine.
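
As a rough illustration of that verification step, the sketch below (Python; the file paths and the vendor-published digest are hypothetical placeholders) refuses to stage an update package unless its SHA-256 digest matches the value from the vendor's advisory:

```python
# Minimal sketch: refuse to stage an IoT update package unless its SHA-256
# digest matches the value published by the vendor. Paths and digests here
# are hypothetical placeholders.
import hashlib
import sys

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in chunks so large firmware images need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(package_path, published_digest):
    actual = sha256_of(package_path)
    if actual != published_digest.strip().lower():
        raise SystemExit(f"REJECT {package_path}: digest mismatch ({actual})")
    print(f"OK {package_path}: digest matches the published value")

if __name__ == "__main__":
    # Usage: python verify_update.py <package file> <sha256 from vendor advisory>
    verify(sys.argv[1], sys.argv[2])
```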

  • Apply basic credential management to interaction with IoT devices. Managers must think differently when it comes to IoT device user authentication and credential management. They should ask, “How does someone interact with this device?” “What do we have to do to ensure only the right people, with the right authorization, are able to access the device?” “What measures do we need to take to verify this access and understand what users are doing once they begin using the device?”

Being able to monitor user sessions is key. IoT devices may not have the same capabilities as modern information systems, such as the ability to maintain or view log trails or delete a log after someone stops using the device. Managers may need to proactively ensure their IoT devices have these capabilities.

  • Employ continuous threat monitoring to protect against attacks. There are several common threat vectors hackers can use to tap into IoT devices. SQL injection and cross-site scripting are favorite weapons malicious actors use to target web-based applications and could be used to compromise connected devices.

Managers should employ IoT device threat monitoring to help protect against these and other types of intrusions. Continuous threat monitoring can be used to alert, report, and automatically address any potentially harmful anomalies. It can monitor traffic passing to and from a device to detect whether the device is communicating with a known bad entity. A device in communication with a command and control system outside of the agency’s infrastructure is a certain red flag that the device—and the network it’s connected to—may have been compromised.
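
A stripped-down version of that traffic check might look like the following sketch (Python; the CSV flow export and the plain-text indicator list are illustrative stand-ins for whatever collector and threat feed the agency actually uses):

```python
# Minimal sketch: flag IoT devices exchanging traffic with known-bad hosts.
# flows.csv and indicators.txt are illustrative inputs.
import csv

def load_blocklist(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip() and not line.startswith("#")}

def flag_flows(flow_csv, blocklist):
    alerts = []
    with open(flow_csv, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: device_ip, remote_ip, bytes
            if row["remote_ip"] in blocklist:
                alerts.append((row["device_ip"], row["remote_ip"], row["bytes"]))
    return alerts

if __name__ == "__main__":
    bad = load_blocklist("indicators.txt")
    for device, remote, nbytes in flag_flows("flows.csv", bad):
        print(f"ALERT: {device} exchanged {nbytes} bytes with known-bad host {remote}")
```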

The IoT is here to stay, and it’s important for federal IT managers to proactively tackle the security challenges it poses. Bills passed by federal and state legislators are a start, but they’re not enough to protect government networks against devices that weren’t designed with security top-of-mind. IoT security is something agencies need to take into their own hands. Managers must understand the risks and put processes, strategies, and tools in place to proactively mitigate threats caused by the IoT.

Find the full article on Fifth Domain.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Back from Austin and home for a few weeks before I head...back to Austin for a live episode of SolarWinds Lab. Last week was the annual Head Geeks Summit, and it was good to be sequestered for a few days with just our team as we map out our plans for world domination in 2020 (or 2021, whatever it takes).

As always, here's a bunch of stuff I found on the internetz this week that I think you might enjoy. Cheers!

Critical Windows 10 vulnerability used to Rickroll the NSA and Github

Patch your stuff, folks. Don't wait, get it done.

WeLeakInfo, the site which sold access to passwords stolen in data breaches, is brought down by the ...

In case you were wondering, the website was allowed to exist for three years before it was finally shut down. No idea what took so long, but I tip my hat to the owners. They didn't steal anything, they just took available data and made it easy to consume. Still, they must have known they were in murky legal waters.

Facial recognition: EU considers ban of up to five years

I can't say if that's the right amount of time; I'd prefer they ban it outright for now. This isn't just a matter of the tech being reliable, it brings about questions regarding basic privacy versus a surveillance state.

Biden wants Sec. 230 gone, calls tech “totally irresponsible,” “little creeps”

Politics aside, I agree with the idea that a website publisher should bear some burden regarding the content allowed. Similar to how I feel developers should be held accountable for deploying software that's not secure, or leaving S3 buckets wide open. Until individuals understand the risks, we will continue to have a mess of things on our hands.

Microsoft pledges to be 'carbon negative' by 2030

This is a lofty goal, and I applaud the effort here by Microsoft to erase their entire carbon footprint since they were founded in 1975. It will be interesting to see if any other companies try to follow, but I suspect some (*cough* Apple) won't even bother.

Google’s Sundar Pichai doesn’t want you to be clear-eyed about AI’s dangers

In today's edition of "do as I say, not as I do", Google reminds us that their new motto is "Only slightly evil."

Technical Debt Is like a Tetris Game

I like this analogy, and thought you might like it as well. Let me know if it helps you.

If you are ever in Kansas City, run, don't walk, to Jack Stack and order the beef rib appetizer. You're welcome.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Brandon Shopp with ideas for improving security at the DoD by finding vulnerabilities and continuously monitoring agency infrastructure.

An early 2019 report from the Defense Department Office of Inspector General revealed how difficult it’s been for federal agencies to stem the tide of cybersecurity threats. Although the DoD has made significant progress toward bolstering its security posture, 266 cybersecurity vulnerabilities still existed. Most of these vulnerabilities were discovered only within the past year—a sure sign of rising risk levels.

The report cited several areas for improvement, including continuous monitoring and detection processes, security training, and more. Here are three strategies the DoD can use to tackle those remaining 200-plus vulnerabilities.

1. Identify Existing Threats and Vulnerabilities

Identifying and addressing vulnerabilities will become more difficult as the number of devices and cloud-based applications on defense networks proliferates. Although government IT managers have gotten a handle on bring-your-own-device issues, undetected devices are still used on DoD networks.

Scanning for applications and devices outside the control of IT is the first step toward plugging potential security holes. Apps like Dropbox and Google Drive may be great for productivity, but they could also expose the agency to risk if they’re not security hardened.

The next step is to scan for hard-to-find vulnerabilities. The OIG report called out the need to improve “information protection processes and procedures.” Most vulnerabilities occur when configuration changes aren’t properly managed. Automatically scanning for configuration changes and regularly testing for vulnerabilities can help ensure employees follow the proper protocols and increase the department’s security posture.
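
One way to picture that kind of automated scan is a simple baseline comparison. The sketch below (Python; the watched paths and baseline file are illustrative, and a real deployment would lean on a purpose-built configuration management or SCAP tool) flags any watched configuration file whose hash no longer matches the approved baseline:

```python
# Minimal sketch: report configuration drift against an approved baseline.
# The baseline file and watched paths are illustrative.
import hashlib
import json
from pathlib import Path

BASELINE_FILE = Path("config_baseline.json")  # e.g. {"/etc/ssh/sshd_config": "<sha256>", ...}
WATCHED = ["/etc/ssh/sshd_config", "/etc/sudoers", "/etc/audit/auditd.conf"]

def fingerprint(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def scan_for_drift():
    baseline = json.loads(BASELINE_FILE.read_text())
    for path in WATCHED:
        expected = baseline.get(path)
        current = fingerprint(path)
        if expected is None:
            print(f"UNTRACKED {path}: not in the approved baseline")
        elif current != expected:
            print(f"DRIFT     {path}: hash changed, review against the approved configuration")

if __name__ == "__main__":
    scan_for_drift()
```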

2. Implement Continuous Monitoring, Both On-Premises and in the Cloud

While the OIG report specifically stated the DoD must continue to proactively monitor its networks, those networks are becoming increasingly dispersed. It’s no longer only about keeping an eye on in-house applications; it’s equally important to be able to spot potential vulnerabilities in the cloud.

DoD IT managers should go beyond traditional network monitoring and look more deeply into the cloud services they use. The ability to see the entire network, including destinations in the cloud, is critically important, especially as the DoD becomes more reliant on hosted service providers.

3. Establish Ongoing User Training and Education Programs

A well-trained user can be the best protection against vulnerabilities, making it important for the DoD to implement a regular training cadence for its employees.

Training shouldn’t be relegated to the IT team alone. A recent study indicates insider threats pose some of the greatest risk to government networks. As such, all employees should be trained on the agency’s policies and procedures and encouraged to follow best practices to mitigate potential threats. The National Institute of Standards and Technology provides an excellent guide on how to implement an effective security training program.

When it comes to cybersecurity, the DoD has made a great deal of progress, but there’s still room for improvement. By implementing these three best practices, the DoD can build off what it’s already accomplished and focus on improvements.

Find the full article on Government Computer News.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

In Austin this week for our annual meeting of Head Geeks. The first order of business is to decide what to call our group. I prefer a "gigabyte of Geeks," but I continue to be outvoted. Your suggestions are welcome.

As always, here's a bunch of links from the internet I hope you find interesting. Enjoy!

Facebook again refuses to ban political ads, even false ones

Zuckerberg continues to show the world he only cares about ad revenue, for without that revenue stream his company would collapse.

Scooter Startup Lime Exits 12 Cities and Lays Off Workers in Profit Push

Are you saying renting scooters your customers then abandon across cities *is not* a profitable business model? That's crazy!

Russian journals retract more than 800 papers after ‘bombshell’ investigation

I wish we could do the same thing with blog posts, old and new.

Alleged head of $3.5M crypto mining scam bought stake in nightclub

A cryptocurrency scam? Say it isn't so! Who knew this was even possible?

Ring confirms it fired four employees for watching customer videos

Ah, but only after an external complaint, and *after* their actions were known internally. In other words, these four would still have jobs if not for the external probe.

Tesla driver arrested for flossing at 84 mph on autopilot

Don't judge, we've all been there, stuck in our car and in need of flossing our teeth.

It's helpful for a restaurant to publish their menu outside for everyone to see.

Welcome back! I hope y'all had a happy and healthy holiday break. I'm back in the saddle after hosting a wonderful Christmas dinner for 20 friends and family. I had some time off as well, which I used to work a bit on my blog as well as some Python and data science learning.

As usual, here's a bunch of links from the internet I hope you'll find useful. Enjoy!

Team that made gene-edited babies sentenced to prison, fined

I wasn't aware we had reached the point of altering babies' DNA, but here we are.

2019 Data Breach Hall of Shame: These were the biggest data breaches of the year

I expect a longer list from 2020.

Bing’s Top Search Results Contain an Alarming Amount of Disinformation

A bit long, but worth some time and a discussion. I never think about how search engines try to determine the veracity of the websites returned in a search.

Google and Amazon are now in the oil business

File this under "Do as I say, not as I do."

Seven Ways to Think Like a Programmer

An essay about data that warmed my heart. I think a lot of this applies to every role, especially for those of us inside IT.

The other side of Stack Overflow content moderation

Start this post by reading the summary, then take in some of the specific cases he downvoted. The short of it is this: humans are horrible at communicating through texts, no matter what the forum.

This Is How To Change Someone’s Mind: 6 Secrets From Research

If you want to have more success at work, read this post. I bet you can think of previous discussions at work and understand where things went wrong.

For New Year's Eve I made something special - 6 pounds of pork belly bites in a honey soy sauce. They did not last long. No idea what everyone else ate, though.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Jim Hansen where he provides tips on leveraging automation to improve your cybersecurity, including deciding what to automate and what tools to deploy to help.

Automation can reduce the need to perform mundane tasks, improve efficiency, and create a more agile response to threats. For example, administrators can use artificial intelligence and machine learning to ascertain the severity of potential threats and remediate them through the appropriate automated responses. They can also automate scripts, so they don’t have to repeat the same configuration process every time a new device is added to their networks.
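
For example, a device-onboarding script might apply the same approved baseline to every new switch or router. The sketch below is one hypothetical way to do it (Python, assuming the open-source netmiko library and a Cisco IOS device; the host, credentials, and configuration lines are placeholders):

```python
# Minimal sketch: push the same baseline configuration to a newly added device
# so the process isn't repeated by hand. Assumes the netmiko library and a
# Cisco IOS device; host, credentials, and config lines are placeholders.
from netmiko import ConnectHandler

BASELINE_CONFIG = [
    "ntp server 10.0.0.1",
    "logging host 10.0.0.2",
    "ip ssh version 2",
]

def apply_baseline(host, username, password):
    conn = ConnectHandler(
        device_type="cisco_ios", host=host, username=username, password=password
    )
    try:
        output = conn.send_config_set(BASELINE_CONFIG)  # apply the baseline commands
        conn.save_config()                              # persist the running config
        return output
    finally:
        conn.disconnect()

if __name__ == "__main__":
    print(apply_baseline("192.0.2.10", "netadmin", "change-me"))
```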

But while automation can save enormous amounts of time, increase productivity, and bolster security, it’s not necessarily appropriate for every task, nor can it operate unchecked. Here are four strategies for effectively automating network security within government agencies.

1. Earmark What Should—And Shouldn’t—Be Automated.

Setting up automation can take time, so it may not be worth the effort to automate smaller jobs requiring only a handful of resources or a small amount of time to manage. IT staff should also conduct application testing themselves and must always have the final say on security policies.

Security itself, however, is ripe for automation. With the number of global cyberattacks rising, the challenge has become too vast and complex for manual threat management. Administrators need systems capable of continually policing their networks, automatically updating threat intelligence, and monitoring and responding to potential threats.

2. Identify the Right Tools.

Once the strategy is in place, it’s time to consider which tools to deploy. There are several security automation tools available, and they all have different feature sets. Begin by researching vendors that have a track record of government certifications, such as Common Criteria, or are compliant with the Defense Information Systems Agency requirements.

Continuous network monitoring for potential intrusions and suspicious activity is a necessity. Being able to automatically monitor log files and analyze them against multiple sources of threat intelligence is critical to being able to discover and, if necessary, deny access to questionable network traffic. The system should also be able to automatically implement predetermined security policies and remediate threats.
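
To illustrate the log-analysis idea, here is a minimal sketch (Python; the feed files, log path, and IP-only matching are simplifying assumptions) that scans a syslog file for addresses appearing in any of several threat-intelligence feeds:

```python
# Minimal sketch: match syslog entries against indicators pulled from several
# threat-intelligence feeds (plain text files here, one IP per line).
import re

FEEDS = ["feed_abuse.txt", "feed_internal.txt"]   # illustrative feed exports
LOG_FILE = "/var/log/syslog"
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def load_indicators(paths):
    indicators = set()
    for path in paths:
        with open(path) as f:
            indicators.update(line.strip() for line in f if line.strip())
    return indicators

def scan(log_path, indicators):
    with open(log_path, errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            for ip in IP_RE.findall(line):
                if ip in indicators:
                    yield lineno, ip, line.rstrip()

if __name__ == "__main__":
    for lineno, ip, line in scan(LOG_FILE, load_indicators(FEEDS)):
        print(f"line {lineno}: indicator {ip} -> {line}")
```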

3. Augment Security Intelligence.

Artificial intelligence and machine learning should also be considered indispensable, especially as IT managers struggle to keep up with the changing threat landscape. Through machine learning, security systems can absorb and analyze data retrieved from past intrusions to automatically and dynamically implement appropriate responses to the latest threats, helping keep administrators one step ahead of hackers.

4. Remember Automation Isn’t Automatic.

The old saying “trust but verify” applies to computers as much as people. Despite the move toward automation, people are and will always be an important part of the process.

Network administrators must conduct the appropriate due diligence and continually audit, monitor and maintain their automated tasks to ensure they’re performing as expected. Updates and patches should be applied as they become available, for example.

Automating an agency’s security measures can be a truly freeing experience for time- and resource-challenged IT managers. They’ll no longer have to spend time tracking down false red flags, rewriting scripts, or manually attempting to remediate every potential threat. Meanwhile, they’ll be able to rest easy knowing the automated system has their backs and their agencies’ security postures have been improved.

Find the full article on Government Computer News.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

I visited the Austin office this past week, my last trip to SolarWinds HQ for 2019. It’s always fun to visit Austin and eat my weight in pork products, but this week was better than most. I took part in deep conversations around our recent acquisition of VividCortex.

I can’t begin to tell you how excited I am for the opportunity to work with the VividCortex team.

Well, maybe I can begin to tell you. Let’s review two data points.

In 2013, SolarWinds purchased Confio Software, makers of Ignite (now known as Database Performance Analyzer, or DPA) for $103 million. That’s where my SolarWinds story begins, as I was included with the Confio purchase. I had been with Confio since 2010, working in sales engineering, customer support, product development, and corporate marketing. We made Ignite into a best-of-breed monitoring solution that’s now the award-winning, on-prem and cloud-hosted DPA loved by DBAs globally.

The second data point is from last week, when SolarWinds bought VividCortex for $117.5 million. One thing I want to make clear is SolarWinds just doubled down on our investment in database performance monitoring. Anyone suggesting anything otherwise is spreading misinformation.

Through all my conversations last week with members of both product teams, one theme was clear. We are committed to providing customers with the tools necessary to achieve success in their careers. We want happy customers. We know customer success is our success.

Another point that was made clear is the VividCortex product will complement rather than replace DPA, expanding our database performance monitoring portfolio in a meaningful way. Sure, there is some overlap with MySQL, as both tools offer support for that platform. But the tools have some key differences in functionality. Currently, VividCortex is a SaaS monitoring solution for popular open-source platforms (PostgreSQL, MySQL, MongoDB, Amazon Aurora, and Redis). DPA provides both monitoring and query performance insights for traditional relational database management systems and is not yet available as a SaaS solution.

This is why we view VividCortex as a product to enhance what SolarWinds already offers for database performance monitoring. We’re now stronger this week than we were just two weeks ago. And we’re now poised to grow stronger in the coming months.

This is an exciting time to be in the database performance monitoring space, with 80% of workloads still Earthed (that is, not yet in the cloud). If you want to know about our efforts regarding database performance monitoring products, just AMA.

I can't wait to get started on helping build next-gen database performance monitoring tools. That’s what VividCortex represents, the future for database performance monitoring, and why this acquisition is so full of goodness. Expect more content in the coming weeks from me regarding our efforts behind the scenes with both VividCortex and DPA.

I hope this edition of the Actuator finds you and yours in the middle of a healthy and happy holiday season. With Christmas and New Year's falling on Wednesday, I'll pick this up again in 2020. Until then, stay safe and warm.

As always, here's a bunch of stuff I found on the internet I thought you might enjoy.

Why Car-Free Streets Will Soon Be the Norm

I'm a huge fan of having fewer cars in the middle of any downtown city. I travel frequently enough to European cities and I enjoy the ability to walk and bike in areas with little worry of automobiles.

Microsoft and Warner Bros trap Superman on glass slide for 1,000 years

Right now, one of you is reading this and wondering how to monitor glass storage and if an API will be available. OK, maybe it's just me.

The trolls are organizing—and platforms aren't stopping them

This has been a problem with online communities since they first started; it's not a new problem.

New Orleans declares state of emergency following cyberattack

Coming to a city near you, sooner than you may think.

Facebook workers' payroll data was on stolen hard drives

"Employee wasn’t supposed to take hard drives outside the office..." Security is hard because people are dumb.

A Sobering Message About the Future at AI's Biggest Party

The key takeaway here is the discussion around how narrow the focus is for specific tasks. Beware the AI snake oil salesman promising you their algorithms and models work for everyone. They don't.

12 Family Tech Support Tips for the Holidays

Not a bad checklist for you to consider when your relatives ask for help over the holidays.

Yes, I do read books about bacon. Merry Christmas, Happy Holidays, and best wishes.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Mav Turner with ideas for improving the management of school networks by analyzing performance and leveraging alerts and capacity planning.

Forty-eight percent of students currently use a computer in school, while 42% use a smartphone, according to a recent report by Cambridge International. These technologies provide students with the ability to interact and engage with content both inside and outside the classroom, and teachers with a means to provide personalized instruction.

Yet technology poses significant challenges for school IT administrators, particularly with regard to managing network performance, bandwidth, and cybersecurity requirements. Many educational applications are bandwidth-intensive and can lead to network slowdowns, potentially affecting students’ abilities to learn. And when myriad devices are tapping into a school’s network, it can pose security challenges and open the doors to potential hackers.

School IT administrators must ensure their networks are optimized and can accommodate increasing user demands driven by more connected devices. Simultaneously, they must take steps to lock down network security without compromising the use of technology for education. And they must do it all as efficiently as possible.

Here are a few strategies they can adopt to make their networks both speedy and safe.

Analyze Network Performance

Finding the root cause of performance issues can be difficult. Is it the application or the network?

Answering this question correctly requires the ability to visualize all the applications, networks, devices, and other factors affecting network performance. Administrators should be able to view all the critical network paths connecting these items, so they can pinpoint and immediately target potential issues whenever they arise.

Unfortunately, cloud applications like Google Classroom or Office 365 Education can make identifying errors challenging because they aren’t on the school’s network. Administrators should be able to monitor the performance of hosted applications as they would on-premises apps. They can then have the confidence to contact their cloud provider and work with them to resolve the issue.

Rely on Alerts

Automated network performance monitoring can save huge amounts of time. Alerts can quickly and accurately notify administrators of points of failure, so they don’t have to spend time hunting; the system can direct them to the issue. Alerts can be configured so only truly critical items are flagged.

Alerts serve other functions beyond network performance monitoring. For example, administrators can receive an alert when a suspicious device connects to the network or when a device poses a potential security threat.
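
As a simplified picture of how that tuning works, the sketch below (Python; the thresholds and sample data are illustrative) classifies interface utilization samples and surfaces only those at or above a chosen severity:

```python
# Minimal sketch: classify utilization samples and only surface alerts at or
# above a chosen severity. Thresholds and sample data are illustrative.
def classify(sample, warn=0.70, crit=0.90):
    """sample: dict with 'interface' and 'utilization' (a fraction from 0.0 to 1.0)."""
    if sample["utilization"] >= crit:
        return "critical"
    if sample["utilization"] >= warn:
        return "warning"
    return "ok"

def alert(samples, min_severity="critical"):
    rank = {"ok": 0, "warning": 1, "critical": 2}
    for s in samples:
        level = classify(s)
        if rank[level] >= rank[min_severity]:
            print(f"[{level.upper()}] {s['interface']} at {s['utilization']:.0%} utilization")

if __name__ == "__main__":
    alert([
        {"interface": "core-sw1:Gi1/0/1", "utilization": 0.45},
        {"interface": "core-sw1:Gi1/0/24", "utilization": 0.93},
    ])
```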

Plan for Capacity

A recent report by the Consortium for School Networking indicates that within the next few years, 38% of students will use, on average, two devices. Those devices, combined with the tools teachers are using, can heavily tax network bandwidth, which is already in demand thanks to broadband growth in K-12 classrooms.

It’s important for administrators to monitor application usage to determine which apps are consuming the most bandwidth and address problem areas accordingly. This can be done in real-time, so issues can be rectified before they have an adverse impact on everyone using the network.

They should also prepare for and optimize their networks to accommodate spikes in usage. These could occur during planned testing periods, for example, but they also may happen at random. Administrators should build in bandwidth to accommodate all users—and then add a small percentage to account for any unexpected peaks.

Tracking bandwidth usage over time can help administrators accurately plan their bandwidth needs. Past data can help indicate when to expect bandwidth spikes.
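
A back-of-the-envelope version of that planning exercise might look like the following sketch (Python; the CSV layout, the 95th percentile, and the 20% headroom figure are illustrative choices, not recommendations from the article):

```python
# Minimal sketch: size a link from historical usage samples by planning for a
# high percentile of demand plus a headroom margin, rather than the average.
# bandwidth_samples.csv and its "mbps" column are illustrative.
import csv
import math

def percentile(values, pct):
    ordered = sorted(values)
    k = max(0, math.ceil(pct / 100 * len(ordered)) - 1)  # nearest-rank method
    return ordered[k]

def plan_capacity(csv_path, pct=95, headroom=0.20):
    with open(csv_path, newline="") as f:
        samples = [float(row["mbps"]) for row in csv.DictReader(f)]
    busy = percentile(samples, pct)
    return busy, busy * (1 + headroom)

if __name__ == "__main__":
    p95, recommended = plan_capacity("bandwidth_samples.csv")
    print(f"95th percentile demand: {p95:.0f} Mbps")
    print(f"Suggested provisioned capacity (+20% headroom): {recommended:.0f} Mbps")
```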

Indeed, time itself is a common thread among these strategies. Automating the performance and optimization of a school network can save administrators from having to do all the maintenance themselves, thereby freeing them up to focus on more value-added tasks. It can also save schools from having to hire additional technical staff, which may not fit in their budgets. Instead, they can put their money toward facilities, supplies, salaries, and other line items with a direct and positive impact on students’ education.

Find the full article on Today’s Modern Educator.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Good morning! By the time you read this post, the first full day of Black Hat in London will be complete. I share this with you because I'm in London! I haven't been here in over three years, but it feels as if I never left. I'm heading to watch Arsenal play tomorrow night, come on you gunners!

As always, here's a bunch of links I hope you find interesting. Cheers!

Hacker’s paradise: Louisiana’s ransomware disaster far from over

The scary part is that the State of Louisiana was more prepared than 90% of other government agencies (HELLO BALTIMORE!), just something to think about as ransomware intensifies.

How to recognize AI snake oil

Slides from a presentation I wish I'd created.

Now even the FBI is warning about your smart TV’s security

Better late than never, I suppose. But yeah, your TV is one of many security holes found in your home. Take the time to help family and friends understand the risks.

A Billion People’s Data Left Unprotected on Google Cloud Server

To be fair, it was data curated from websites. In other words, no secrets were exposed. It was an aggregated list of information about people. So, the real questions should now focus on who created such a list, and why.

Victims lose $4.4B to cryptocurrency crime in first 9 months of 2019

Crypto remains a scam, offering an easy way for you to lose real money.

Why “Always use UTC” is bad advice

Time zones remain hard.

You Should Know These Industry Secrets

Saw this thread in the past week and many of the answers surprised me. I thought you might enjoy them as well.

You never forget your new Jeep's first snow.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Jim Hansen about improving security by leveraging the phases of the CDM program and enhancing data protection by taking one step at a time.

The Continuous Diagnostics and Mitigation (CDM) Program, issued by the Department of Homeland Security (DHS), goes a long way toward helping agencies identify and prioritize risks and secure vulnerable endpoints.

How can a federal IT pro more effectively improve an agency’s endpoint and data security? The answer is multi-fold. First, incorporate the guidance provided by CDM into your cybersecurity strategy. Second, in addition to CDM, develop a data protection strategy for an Internet of Things (IoT) world.

Discovery Through CDM

According to the Cybersecurity and Infrastructure Security Agency (CISA), the DHS sub-agency that released CDM, the program “provides…Federal Agencies with capabilities and tools to identify cybersecurity risks on an ongoing basis, prioritize these risks based on potential impacts, and enable cybersecurity personnel to mitigate the most significant problems first.”

CDM takes federal IT pros through four phases of discovery:

  • What’s on the network? Here, federal IT pros discover devices, software, security configuration settings, and software vulnerabilities.

  • Who’s on the network? Here, the goal is to discover and manage account access and privileges; trust determination for users granted access; credentials and authentication; and security-related behavioral training.

  • What’s happening on the network? This phase discovers network and perimeter components; host and device components; data at rest and in transit; and user behavior and activities.

  • How is data protected? The goal of this phase is to identify cybersecurity risks on an ongoing basis, prioritize these risks based upon potential impacts, and enable cybersecurity personnel to mitigate the most significant problems first.

Enhanced Data Protection

A lot of information is available about IoT-based environments and how best to secure that type of infrastructure. In fact, there’s so much information it can be overwhelming. The best course of action is to stick to three basic concepts to lay the groundwork for future improvements.

First, make sure security is built in from the start as opposed to making security an afterthought or an add-on. This should include the deployment of automated tools to scan for and alert staffers to threats as they occur. This type of round-the-clock monitoring and real-time notifications help the team react more quickly to potential threats and more effectively mitigate damage.

Next, assess every application for potential security risks. There is a seemingly inordinate number of external applications that track and collect data. It requires vigilance to ensure these applications are safe before they’re connected, rather than finding vulnerabilities after the fact.

Finally, assess every device for potential security risks. In an IoT world, there’s a whole new realm of non-standard devices and tools trying to connect. Make sure every device meets security standards; don’t allow untested or non-essential devices to connect. And, to be sure agency data is safe, set up a system to track devices by MAC and IP address, and monitor the ports and switches those devices use.
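
Here is a minimal sketch of that kind of tracking (Python; the CSV exports are illustrative, standing in for data gathered from ARP tables, DHCP leases, or a discovery tool) that compares devices seen on the network against an approved inventory keyed by MAC address:

```python
# Minimal sketch: audit discovered devices against an approved inventory keyed
# by MAC address. Both CSV files are illustrative exports.
import csv

def load_csv(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def audit(approved_path, discovered_path):
    approved = {row["mac"].lower(): row for row in load_csv(approved_path)}
    for dev in load_csv(discovered_path):  # expects columns: mac, ip, switch_port
        mac = dev["mac"].lower()
        if mac not in approved:
            print(f"UNAPPROVED device {mac} at {dev['ip']} on port {dev['switch_port']}")
        elif approved[mac].get("ip") and approved[mac]["ip"] != dev["ip"]:
            print(f"IP CHANGE for {mac}: expected {approved[mac]['ip']}, saw {dev['ip']}")

if __name__ == "__main__":
    audit("approved_inventory.csv", "discovered_devices.csv")
```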

Conclusion

Security isn’t getting any easier, but there are an increasing number of steps federal IT pros can take to enhance an agency’s security posture and better protect agency data. Follow CDM guidelines, prepare for a wave of IoT devices, and get a good night’s sleep.

Find the full article on Government Technology Insider.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Brandon Shopp where he discusses the Army’s new training curriculum and the impacts on network and bandwidth.

The U.S. Army is undergoing a major technology shift affecting how soldiers prepare for battle. Core to the Army’s modernization effort is the implementation of a Synthetic Training Environment (STE) combining many different performance-demanding components, including virtual reality and training simulation software.

The STE’s One World Terrain (OWT) concept comprises five phases, from the initial point of data collection to final application. During phase four, data is delivered to wherever soldiers are training. Raw data is used to automatically replicate digital 3-D terrains, so soldiers can experience potential combat situations in virtual reality through the Army’s OWT platform before setting foot on a battlefield.

Making One World Terrain Work

For the STE to work as expected, the Army’s IT team should consider implementing an advanced form of network monitoring focused specifically on bandwidth optimization. The Army’s objective with OWT is to provide soldiers with as accurate a representation of actual terrain as possible, right down to extremely lifelike road structures and vegetation. Transmitting so much information can create network performance issues and bottlenecks. IT managers must be able to continually track performance and usage patterns to ensure their networks can handle the traffic.

With this practice, administrators may discover which areas can be optimized to accommodate the rising bandwidth needs presented by the OWT. For example, their monitoring may uncover other applications, outside of those used by the STE, unnecessarily consuming large amounts of bandwidth. They can shut those down, limit access, or perform other tasks to increase bandwidth allocation, relieve congestion, and improve network performance, not just for STE resources but across the board.
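
As a rough sketch of what such bandwidth tracking looks like, the snippet below samples Linux interface byte counters over a short interval and reports utilization. The interface name and link speed are assumptions, and production monitoring would instead poll routers via SNMP or flow records.

    """Sketch of per-interface bandwidth tracking on a Linux host, sampling
    /proc/net/dev byte counters. Interface name and link speed are assumed."""
    import time

    def rx_tx_bytes(interface: str) -> tuple[int, int]:
        with open("/proc/net/dev") as stats:
            for line in stats:
                if line.strip().startswith(interface + ":"):
                    fields = line.split(":", 1)[1].split()
                    return int(fields[0]), int(fields[8])  # rx bytes, tx bytes
        raise ValueError(f"interface {interface!r} not found")

    def utilization(interface: str = "eth0", link_mbps: int = 1000,
                    interval: float = 5.0) -> float:
        """Percentage of link capacity used over the sampling interval."""
        rx1, tx1 = rx_tx_bytes(interface)
        time.sleep(interval)
        rx2, tx2 = rx_tx_bytes(interface)
        bits = ((rx2 - rx1) + (tx2 - tx1)) * 8
        return 100.0 * bits / (link_mbps * 1_000_000 * interval)

    if __name__ == "__main__":
        print(f"eth0 utilization: {utilization():.1f}%")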

Delivering a Consistent User Experience

Hidden components found in every complex IT infrastructure, however, can play havoc with the network’s ability to deliver the desired user experience. There might be multiple tactical or common ally networks, ISPs, agencies, and more, all competing for resources and putting strain on the system. Byzantine application stacks can include solutions from multiple vendors, not all of which play nicely with each other. Each of these can create its own problems, from server errors to application failures, and can directly affect the information provided to soldiers in training.

To ensure a consistent and reliable experience, administrators should take a deep dive into their infrastructure. Monitoring database performance is a good starting point because it allows teams to identify and resolve issues causing suboptimal performance. Server monitoring is also ideal, especially if it can monitor servers across multiple environments, including private, public, and hybrid clouds.

These practices should be complemented with detailed application monitoring to provide a clear view of all the applications within the Army’s stack. Stacks tend to be complicated and sprawling, and when one application fails, the others are affected. Gaining unfettered insight into the performance of the entire stack can ward off problems that may adversely affect the training environment.
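
A toy version of that kind of cross-stack check might look like the following, which probes a hypothetical application endpoint over HTTP and a hypothetical database port over TCP. The hostnames, URL, and port are placeholders for illustration only.

    """Sketch of a cross-stack health check: an HTTP probe for the application
    tier and a TCP probe for the database tier. Hostnames and URL are invented."""
    import socket
    import urllib.request

    def http_ok(url: str, timeout: float = 3.0) -> bool:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return 200 <= response.status < 300
        except OSError:
            return False

    def tcp_ok(host: str, port: int, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        checks = {
            "app tier": http_ok("http://app.example.mil/healthz"),
            "db tier": tcp_ok("db.example.mil", 5432),
        }
        for tier, healthy in checks.items():
            print(f"{tier}: {'ok' if healthy else 'FAILED'}")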

Through Training and Beyond

These recommendations can help well beyond the STE. The Army is clearly a long way from the days of using bugle calls, flags, and radios for communication and intelligence. Troops now have access to a wealth of information to help them be more intelligent, efficient, and tactical, but they need reliable network operations to receive the information. As such, advanced network monitoring can help them prepare for what awaits them in battle—but it can also support them once they get there.

Find the full article on Government Computer News.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Read more
0 7 354
Level 10

It’s time to stop seeing IT as a black hole to throw money into and instead show how it gives back to the business. Often, the lack of proper measurement and standardization is the problem, so how do we address it?

We’ve all tried to secure more funds from the organization to undertake a project. Whether you’re replacing aging hardware or out-of-support software, it’s hard to obtain the required financing unless the business sees the value the IT department brings. That’s because the board typically sees IT as a cost center.

A cost center is defined as a department within a business not directly adding profit, but still requiring money to run. A profit center is the inverse, whereby its operation directly adds to overall company profitability.

But how can you get there? Start by recognizing that everything the IT department does is a process, and anything that is a process can be measured and therefore improved. People, technology, and other assets pass through those processes to deliver customer success or business results, which are the strategic outcomes you want to enhance. If you improve the technology and the processes, you’ll increase the strategic outcomes.

By measuring and benchmarking these processes, you can illustrate the gains your upgrades deliver. Maybe a web server can now support 20% more traffic, or its page-load latency has dropped by four seconds, which over a three-month period has led to a 15% increase in web traffic and an 8% rise in sales. I’ve pulled these figures from the air, but the point is the finance department, which evaluates spending, can now see a tangible return on the released funds: the web server project increased traffic, sales, and therefore profits.

When you pitch the next project, you don’t have to say only “it’s faster/newer/bigger than our current system” (although you should still explain how the faster, newer, bigger hardware or software will improve the overall system). Without data, you have no proof, and the proof is in the pudding, as they say. Nothing will make a CFO happier than a project with milestones and KPIs (well, maybe three quarters exceeding predictions). So, how do we measure and report all these statistics?

If you think of deployments in terms of weeks or months, you’re trying to deploy something monolithic yet composed of many moving and complex parts. Try to break this down. Think of it like a website: you can update one page without having to redo the whole site. Then you start to think, “instead of the whole page, what about changing a .jpg or a background?” Before long, you’ve started to decouple the application at strategic points, allowing for independent improvement. At this stage, I’d point to the Cloud Native Computing Foundation Trail Map as a great way to see where to go next; its whole ethos of empowering organizations to run modern, scalable applications can help guide your transformation.

For now, though, we’re looking at the measurement aspect of any application deployment. I’m not just talking about hardware or network monitoring, but about establishing baselines and peak loads, and being able to predict when a system will reach capacity and how to react to it.
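
One way to picture that kind of capacity prediction is a simple trend line over the baseline data you’ve collected. The sketch below (Python 3.10+, with invented sample figures) extrapolates when a daily traffic metric will hit a stated ceiling.

    """Sketch of baselining a metric and projecting when it will hit capacity,
    using a least-squares trend line. The sample data below is invented."""
    from statistics import linear_regression  # Python 3.10+

    # (day, peak GB of daily traffic) -- hypothetical observations
    samples = [(1, 410), (2, 425), (3, 450), (4, 470), (5, 505)]
    capacity_gb = 800

    slope, intercept = linear_regression([d for d, _ in samples],
                                         [g for _, g in samples])
    if slope > 0:
        days_until_full = (capacity_gb - intercept) / slope
        print(f"trend: +{slope:.1f} GB/day; capacity reached around day "
              f"{days_until_full:.0f}")
    else:
        print("no growth trend detected")

Real capacity planning needs seasonality and peak-load awareness, but even a crude projection like this turns “we might run out” into a date you can plan around.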

Instead of being a reactive IT department, you become proactive. Routing tickets directly to your developers lets them react faster to any errors from a code change and quickly fix the issue or revert to an earlier deployment, failing fast and keeping performance on track.

If you’re in charge of an application, or support one on a daily basis, I suggest you start measuring and recording everywhere you can across the full stack. Understand what normal looks like and how it differs from “rush hour,” so you can say with more certainty it’s not the network. Maybe it’s the application, or it’s the DNS (it’s always DNS) causing delays, lost revenue, or worse, complete outages. Prove to the naysayers in your company that you have the correct details and that 73.6% of the statistics you present aren’t made up.

Read more
0 14 372
Level 17

I am back in Orlando this week for Live 360, where I get to meet up with 1,100 of my close personal data friends. If you're attending this event, please find me--I'm the tall guy who smells like bacon.

As always, here are some links I hope you find interesting. Enjoy!

Google will offer checking accounts, says it won’t sell the data

Because Google has proved itself trustworthy over the years, right?

Google Denies It’s Using Private Health Data for AI Research

As I was just saying...

Automation could replace up to 800 million jobs by 2035

Yes, the people holding those jobs will transition to different roles. It's not as if we'll have 800 million people unemployed.

Venice floods: Climate change behind highest tide in 50 years, says mayor

I honestly wouldn't know if Venice was flooded or not.

Twitter to ban all political advertising, raising pressure on Facebook

Your move, Zuck.

California man runs for governor to test Facebook rules on lying

Zuckerberg is doubling down with his stubbornness on political ads. That's probably because Facebook revenue comes from such ads, so ending them would kill his bottom line.

The Apple Card Is Sexist. Blaming the Algorithm Is Proof.

Apple, and their partners, continue to lower the bar for software.

Either your oyster bar has a trough or you're doing it wrong. Lee & Rick's in Orlando is a must if you are in the area.


Read more
0 33 728
Level 12

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Brandon Shopp with ideas for protecting IT assets in battlefield situations.

“Dominance” and “protection” sum up the Defense Department’s goals as U.S. armed forces begin to modernize their networks and communications systems. DOD is investing significant resources in providing troops with highly advanced technology so they can effectively communicate with each other and allies in even the harshest environments.

The Army’s ESB-E tactical network initiative, for example, represents an attempt to keep warfighters constantly connected through a unified communications network. These solutions will be built on more scalable, adaptable, and powerful platforms than those provided by older legacy systems.

Programs like ESB-E are being designed to provide wide-scale communications in hostile territory, and it will be incumbent upon troops in the field to monitor, manage, and secure the network to fulfill the “protection” part of DOD’s two-fisted battlefield domination strategy.

Moving forward to take this technological hill, DOD should keep these three considerations in mind.

1. The Attack Surface Will Increase Exponentially

Over the years, the battlefield has become increasingly kinetic and dependent upon interconnected devices and even artificial intelligence. The Army Research Laboratory calls this the internet of battlefield things—a warzone with different points of contact ultimately resulting in everything and everyone being more connected and, thus, intelligent.

The Pentagon is looking to take the concept as far as possible to give warfighters a tactical and strategic edge. For example, the Army wants to network soldiers and their weapons systems, and the Navy plans to link its platforms across hundreds of ships.

Opening these communication channels will significantly increase the potential attack surface. The more connection points, the greater the threat of exposure. Securing a communications system of such complexity will prove to be a far more daunting challenge than what’s involved in monitoring and managing a traditional IT network. Armed forces must be prepared to monitor, maintain, and secure the entire communications system.

2. Everyone Must Have Systems Expertise

The line between soldiers and system administrators has blurred as technology has advanced into the battlefield. As communications systems expand, all service members must be able to identify problems to ensure both unimpeded and uninterrupted communications and the security of the information being exchanged.

All troops must be bought into the concept of protecting the network and its communications components and be highly skilled in managing and maintaining these technologies. This is particularly important as communications solutions evolve.

Soldiers will need to quickly secure communications tools if they’re compromised, just as they would any other piece of equipment harboring sensitive information or access points. And they will require clear visibility into the entirety of the network to be able to quickly pinpoint any anomalies.

3. Staff Must Increase Commensurate to the Size of the Task

The armed forces must bulk up on staff to support these expansive modern communications systems. Fortunately, the military has a wealth of individuals with network and systems administration experience. Unfortunately, it lacks depth in other critical areas.

Security specialists remain in high demand, but the cybersecurity workforce gap is real, even in the military. The White House’s National Cyber Strategy offers some good recommendations, including reskilling workers from other disciplines and identifying and fostering new talent. The actions highlighted in the plan align with DOD’s need to fortify and strengthen its cybersecurity workforce as it turns its focus toward relentlessly winning the battlefield communications war.

Whoever wins this war will truly establish dominance over air, land, sea, and cyberspace. Victory lies in educating and finding the right personnel to protect information across what will undoubtedly be a wider and more attractive target for America’s adversaries.

Find the full article on Government Computer News.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Read more
0 10 397
Level 10

In the IT industry, you’ll hear “I’ll sell you a DevOps; how much is it worth?” But the joke’s on you because you can’t sell (or buy) DevOps, as it is, in fact, an intangible entity. It’s a business process combining software development (Dev) and IT management processes (Ops) with the aim of helping teams understand what goes into making and maintaining applications and business processes. All this happens while working as a team to improve the overall performance and stability of said apps and processes rather than “chucking it over the fence” once your department’s piece of the puzzle is finished.

DevOps is often referred to as a journey, and you probably need to pass several milestones before you could consider your company a DevOps house. Several of the major milestones stem from the idea of adopting a blue/green method of deployment, in which you deploy a new version of your code (blue) running alongside the current version (green) and slowly move production traffic over to the new blue deployment while monitoring the application to see if improvements have been made. Once all the traffic is running on the blue version, you can stage the next change on the green environment. If the blue deployment is a detriment to the application, it’s backed out and all traffic reverts to the current green version.
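
A minimal sketch of that cutover loop is below; set_traffic_split() and error_rate() are hypothetical stand-ins for your load balancer API and monitoring system, not real library calls.

    """Sketch of a blue/green cutover loop: shift traffic to the new (blue)
    version in steps, watch an error-rate signal, and roll back if it degrades.
    set_traffic_split() and error_rate() are hypothetical integration hooks."""
    import time

    def set_traffic_split(blue_percent: int) -> None:
        print(f"routing {blue_percent}% of traffic to blue")  # placeholder

    def error_rate(deployment: str) -> float:
        return 0.2  # placeholder: percent of failed requests, from monitoring

    def cut_over(step: int = 20, threshold: float = 1.0,
                 soak_seconds: int = 300) -> bool:
        for blue_percent in range(step, 101, step):
            set_traffic_split(blue_percent)
            time.sleep(soak_seconds)      # let real traffic hit the new version
            if error_rate("blue") > threshold:
                set_traffic_split(0)      # revert everything to green
                return False
        return True                       # blue now serves 100% of traffic

    if __name__ == "__main__":
        print("cutover succeeded" if cut_over(soak_seconds=1)
              else "rolled back to green")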

A key part of the blue/green approach is continuous integration and continuous deployment (CI/CD), whereby minor improvements are always being made with the goal of optimizing the software and the hardware it runs on. To get to this point, you need a system in place to continuously deploy to production, as well as a platform for continual testing. Your QA processes need to cover everything from user integration to vulnerability testing and change management, and since you don’t want to be hunting around for IP addresses or resource pools to run it on, automation is going to be key.
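
In practice the pipeline is defined in whatever CI system you use, but the logic reduces to “run the stages in order and stop on the first failure.” Here’s a hedged sketch of that logic; the stage commands are placeholders for whatever your toolchain actually uses.

    """Sketch of a deployment pipeline: each stage must pass before the next
    runs, so a failing test or scan stops the release automatically."""
    import subprocess
    import sys

    STAGES = [
        ("unit tests", ["pytest", "-q"]),
        ("vulnerability scan", ["safety", "check"]),        # hypothetical tooling
        ("build image", ["docker", "build", "-t", "app:candidate", "."]),
        ("deploy to staging", ["./deploy.sh", "staging"]),   # hypothetical script
    ]

    def run_pipeline() -> int:
        for name, command in STAGES:
            print(f"--- {name} ---")
            if subprocess.run(command).returncode != 0:
                print(f"pipeline stopped: {name} failed")
                return 1
        print("all stages passed; candidate is ready for production")
        return 0

    if __name__ == "__main__":
        sys.exit(run_pipeline())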

As you move towards CI/CD adoption, rather than keeping separate coding and testing phases, you begin to test as the code is being written. In turn, you’ll start to automate this testing and the eventual movement into production, which is referred to as a deployment pipeline. Finally, you’ll need a more detailed approach to performance monitoring, hardware monitoring, software monitoring, and logging. It’s no longer good enough to look at network latency alone; you need a way to understand the whole performance picture, including the I/O to an application stack, the number of code commits and bugs identified, the vulnerabilities being handled, and the environment’s health status. With so many moving parts, you’ll also need something to ingest the logs and give you greater insight into and analysis of your environment.

But before any of this can happen, the first and possibly biggest hurdle you’ll have to clear is the cultural shift within the organization. A willingness to cooperate truthfully and honestly, along with making failure less expensive, is at the core of this shift, and the move must be led from the top down within the company. Getting IT ops, software development, and security to stop pointing fingers at each other and understand they share responsibility for each other’s work can be a challenge, but if they’re properly incentivized and understand the overall goal, the shift can be a smoother process for the organization.

Building the correct foundation through the milestones above lets you move from getting started into the five stages of DevOps evolution: Normalization, Standardization, Expansion, Automated Infrastructure Delivery, and Self-Service. Companies moving into the Normalization stage adhere to true agile methods, and the speed at which they make changes begins to increase; with time, they’re no longer hanging around like a loris, taking days or weeks to patch critical vulnerabilities, but moving and adapting with the speed of a peregrine falcon.

The recent Puppet 2019 State of DevOps Report makes the case for improving your security stance by moving through these five stages of evolution so you can adapt quickly to vulnerabilities. For instance, about 7% of those surveyed can respond within an hour, and the organizations with fully integrated security practices show the highest levels of DevOps evolution. This evolution, in turn, will let you soar through the clouds.

Read more
1 12 380
Level 17

Home this week and getting ready for Microsoft Ignite next week in Orlando. If you're at Ignite, please stop by the booth and say hello. I love talking data with anyone.

As always, here's a bunch of links I found interesting. Enjoy!

Microsoft beats Amazon to win the Pentagon’s $10 billion JEDI cloud contract

The most surprising part of this is an online bookstore thought they were the frontrunner. This deal underscores the difference between an enterprise software company with a cloud, and an enterprise infrastructure hosting company that also sells books.

Google claims it has achieved 'quantum supremacy' – but IBM disagrees

You mean Google would embellish upon facts to make themselves look better? Color me shocked.

Amazon migrates more than 100 consumer services from Oracle to AWS databases

"Amazon doesn't run on Oracle; why should you?"

“BriansClub” Hack Rescues 26M Stolen Cards

Counter-hacking is a thing. Expect to see more stories like this one in the coming years.

Berkeley City Council Unanimously Votes to Ban Face Recognition

Until the underlying technology improves, it's best for us to disallow the use of facial recognition for law enforcement purposes.

China’s social credit system isn’t about scoring citizens — it’s a massive API

Well, it's likely both, and a possible surveillance system. But if it keeps jerks away from me when I travel, I'm all for it.

Some Halloween candy is actually healthier than others

Keep this in mind when you're enforcing the Dad Tax on your kid's candy haul tomorrow night.

Every now and then my fire circle regresses to its former life as a pool.


Read more
1 47 1,037
Level 12

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Mav Turner with suggestions on improving your agency’s FITARA score. FITARA rolls up scores from other requirements and serves to provide a holistic view of agency performance.

The most recent version of the scorecard measuring agency implementation of the Federal IT Acquisition Reform Act gave agencies cause for both celebration and concern. On the whole, scores in December’s FITARA Scorecard 7.0 rose, but some agencies keep earning low scores.

Agencies don’t always have the appropriate visibility into their networks to allow them to be transparent. All agencies should strive for better network visibility. Let’s look at how greater visibility can help improve an agency’s score and how DevOps and agile approaches can propel their modernization initiatives.

Software Licensing

Agencies with the lowest scores in this category failed to provide regularly updated software licensing inventories. This isn’t entirely surprising; after all, when licenses aren’t immediately visible, they tend to get forgotten or buried as a budget line item. Out of sight, out of mind.

However, the Making Electronic Government Accountable by Yielding Tangible Efficiencies Act (MEGABYTE Act) of 2016 is driving agencies to make some changes. MEGABYTE requires agencies to establish comprehensive inventories of their software licenses and use automated discovery tools to gain visibility into and track them. Agencies are also required to report on the savings they’ve achieved by optimizing their software licensing inventory.

Even if an agency doesn’t have an automated suite of solutions, it can still assess its inventory. This can be a great exercise for cleaning house and identifying “shelfware,” software purchased but no longer being used.
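
Even the manual version of this exercise boils down to comparing what you bought against what discovery shows is actually in use. Here’s a tiny sketch with invented inventories; the product names and seat counts are placeholders.

    """Sketch of a MEGABYTE-style license review: compare the licenses an agency
    pays for against what discovery found installed and in use. Data is invented."""
    purchased = {"OfficeSuite Pro": 500, "DiagramTool": 200, "LegacyCAD": 50}
    in_use = {"OfficeSuite Pro": 430, "DiagramTool": 35}

    for product, seats in purchased.items():
        active = in_use.get(product, 0)
        idle = seats - active
        if idle > 0:
            print(f"{product}: {idle} of {seats} seats idle -> candidate shelfware")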

Risk Management

Risk management is directly tied to inventory management. IT professionals must know what applications and technologies comprise their infrastructures. Obtaining a complete understanding of everything within those complex networks can be daunting, but there are solutions to help.

Network and inventory monitoring technologies can give IT professionals insight into the different components affecting their networks, from mobile devices to servers and applications. They can use these technologies to monitor for potential intrusions and threats, but also to look for irregular traffic patterns and bandwidth issues.

Data Center Optimization

Better visibility can also help IT managers identify legacy applications to modernize. Knowing which applications are being used is critical to being able to determine which ones should be removed and where to focus modernization efforts.

Unfortunately, agencies discover they still need legacy solutions to complete certain tasks. They get stuck in a vicious circle where they continue to add to, not reduce, their data centers. Their FITARA scores end up reflecting this struggle.

Applying a DevOps approach to modernization can help agencies achieve their goals. DevOps is often based on agile development practices enabling incremental improvements in short amounts of time; teams see what they can realistically get done in three to five weeks. They prioritize the most important projects and strive for short-term wins. This incremental progress can build momentum toward longer-term goals, including getting all legacy applications offline and reducing costly overhead.

While visibility and transparency are essential for improvements across all these categories, FITARA scorecards themselves are also useful for shining light on the macro problems agencies face today. They can help illuminate areas of improvement, so IT professionals can prioritize their efforts and make a significant difference to their organizations. Every government IT manager should stay up-to-date on the scoring methodologies and how other agencies are doing.

Find the full article on Government Computer News.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Read more
0 10 488
Level 12

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Jim Hansen about the benefits and challenges of edge computing. Ultimately, this new technology requires scrutiny and planning.

Edge computing is here to stay and it’s no wonder. Edge computing provides federal IT pros with a range of advantages they simply don’t have with more traditional computing environments.

First, edge computing brings memory and computing power closer to the source of the data, resulting in faster processing times, lower bandwidth requirements, and improved flexibility. It can also be a source of cost savings: because data is processed in real time on the edge devices, edge computing saves computing cycles on cloud servers and reduces bandwidth requirements.
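
A toy example of the bandwidth effect: instead of shipping every raw sensor reading to the cloud, the edge device summarizes a window locally and sends one small record. The readings below are simulated and the summary format is an assumption, not a prescribed one.

    """Sketch of edge-side aggregation: summarize a window of readings locally
    and compare the payload size against sending the raw data."""
    import json
    import random
    import statistics

    def raw_window(samples: int = 600) -> list[float]:
        # e.g., one temperature reading per second for ten minutes
        return [20 + random.random() for _ in range(samples)]

    def summarize(window: list[float]) -> dict:
        return {
            "count": len(window),
            "mean": round(statistics.mean(window), 2),
            "max": round(max(window), 2),
            "min": round(min(window), 2),
        }

    if __name__ == "__main__":
        window = raw_window()
        summary = json.dumps(summarize(window))
        raw_size = len(json.dumps(window))
        print(f"raw payload: {raw_size} bytes, summarized payload: {len(summary)} bytes")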

However, edge computing also introduces its share of challenges. The greatest are visibility and security, owing to the decentralized nature of edge computing.

Strategize

As with any technology implementation, start with a strategy. Remember, edge devices are considered agency devices, not cloud devices; they’re therefore the responsibility of the federal IT staff.

Include compliance and security details in the strategy, as well as configuration management. Create thorough documentation. Standardize wherever possible to enhance consistency and ease manageability.

Visualization and Security

Remember, accounting for all IT assets includes edge-computing devices, not just those in the cloud or on premises. Be sure to choose a tool that not only monitors remote systems but also provides automated discovery and mapping, so you have a complete understanding of all edge devices.

In fact, consider investing in tools with full-infrastructure visualization, so you can have a complete picture of the entire network at all times. Network, systems, and cloud management and monitoring tools will optimize results and provide protection across the entire distributed environment.

To help strengthen security all the way out to edge devices, be sure all data is encrypted and patch management is part of the security strategy. Strongly consider using automatic push update software to ensure software stays current and vulnerabilities are addressed in a timely manner. This is an absolute requirement for ensuring a secure edge environment, as is an advanced Security Information and Event Management (SIEM) tool to ensure compliance while mitigating potential threats.

A SIEM tool will also assist with continuous monitoring, which helps federal IT pros maintain an accurate picture of the agency’s security risk posture by providing near real-time security status. This is particularly critical for edge-computing devices, which can often go unsecured.
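
For a sense of the kind of correlation a SIEM automates, here’s a bare-bones sketch that counts failed SSH logins per source address in a Linux auth log and flags spikes. The log path and alert threshold are assumptions, and a real SIEM ingests far more than one log source.

    """Sketch of a single SIEM-style correlation: count failed SSH logins per
    source address and flag possible brute-force activity."""
    import re
    from collections import Counter

    FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

    def failed_logins(path: str = "/var/log/auth.log") -> Counter:
        counts = Counter()
        with open(path, errors="ignore") as log:
            for line in log:
                match = FAILED.search(line)
                if match:
                    counts[match.group(1)] += 1
        return counts

    if __name__ == "__main__":
        for source, attempts in failed_logins().most_common():
            if attempts >= 10:  # arbitrary alert threshold
                print(f"possible brute force from {source}: {attempts} failures")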

Conclusion

Edge computing’s distributed nature adds complexity: more machines, greater management needs, and a larger attack surface.

Luckily, as computing technology has advanced, so has monitoring and visualization technology, helping federal IT pros realize the benefits of edge computing without additional management or monitoring pains.

Find the full article on Government Technology Insider.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Read more
1 10 571
Level 17

In Austin this week for THWACKcamp. I hope you're watching the event and reading this post later in the day. We tried a new format this year--I hope you enjoy what we built.

As always, here are some links I found interesting this week. Enjoy!

GitHub renews controversial $200,000 contract with ICE

“At GitHub, we believe in empowering developers around the world. We also believe in basic human rights, treating people with respect and dignity, and cold, hard, cash.”

NASA has a new airplane. It runs on clean electricity

I hope this technology doesn't take 30 years to come to market.

Revealed: the 20 firms behind a third of all carbon emissions

Maybe we need to work on electric projects for these companies instead.

WeWork expected to cut 500 tech roles

It seems every week there's another company collapsing under the weight of the absurdity of the business model.

Visa, MasterCard, Stripe, and eBay all quit Facebook’s Libra in one day

I don't understand why they were involved to begin with.

Linus Torvalds isn't concerned about Microsoft hijacking Linux

Microsoft is absolutely a different company. It's good to see Linus acknowledge this.

Elizabeth Warren trolls Facebook with 'false' Zuckerberg ad

Here's a thought - maybe don't allow any political ads on Facebook. That way we don't have to worry about what is real or fake. Of course that can't happen, because Facebook wants money.

The leaves have turned, adding some extra color to the fire circle.


Read more
1 32 709