
Geek Speak


Happy 4th of July everyone! I hope everyone reading this is enjoying time well spent with family and friends this week. My plans include landscaping, fireworks, and meat sweats, not necessarily in that order.

 

As always, here are some links I hope you find interesting.

 

Your database is going to the cloud and it's not coming back

There's still time to brush up on cloud security and minimize your risk of a data breach.

 

When It Comes to Ransomware, Air Gaps Are the Best Defense

Nice reminder about the 3-2-1 rule for backups. Three copies, two different media, and one stored offsite. If everyone followed that rule, there'd be no need to pay any ransom.

 

US generates more electricity from renewables than coal for first time ever

I'm a huge fan of renewable energy, and I hope to see this trend continue.

 

Former Equifax executive sentenced to prison for insider trading prior to data breach

But not for allowing the data breach to happen. We need stronger laws regarding responsibility for a data breach.

 

Fortune 100 passwords, email archives, and corporate secrets left exposed on unsecured Amazon S3 server

As I was just saying...

 

You’re Probably Complaining the Wrong Way

Sound advice for everyone, especially those of us in IT who need to get a message across to various teams.

 

14 Fantastic Facts About the Fourth of July

Some interesting pieces of trivia for you to digest along with your traditional holiday salmon dinner.

 

Happy Birthday America!

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Mav Turner with suggestions for building a battle-hardened network for the Army. The Army has been using our products to troubleshoot their networks for years, and our tools can help with Mav’s suggestions as well.

 

The U.S. Army is leading the charge on the military’s multidomain battle concept—but will federal IT networks enable this initiative or inhibit it?

 

The network is critical to the Army’s vision of combining the defense domains of land, air, sea, space and cyberspace to protect and defend against adversaries on all fronts. As Gen. Stephen Townsend, USA, remarked to AFCEA conference attendees earlier this year, the Army is readying for a future reliant on telemedicine, 3-D printing, and other technologies that will prove integral to multidomain operations. “The network needs to enable all that,” said Townsend.

 

But the Army’s network, as currently constituted, isn’t quite ready to effectively support this ambitious effort. In response, Maj. General Peter Gallagher, USA, director of the Army's Network Cross Functional Team, has called for a flat network that converges these disparate systems and is more dynamic, automated, and self-healing.

 

The Army has employed a three-part strategy to solve these challenges and modernize its network, but modernization can open up potential security gaps. As the Army moves forward with readying its network for multidomain battles, IT professionals will want to take several steps to ensure network operations remain rock solid and secure.

 

Identify and Sunset Outdated Technologies

 

The Army may want to consider identifying and sunsetting outdated legacy technologies that can introduce connectivity, compatibility, and security issues. Legacy technologies may not be easily integrated into the new network environment, which could pose a problem as the service consolidates its various network resources. They could slow down network operations, preventing troops from being able to access vital information in times of need. They could also introduce security vulnerabilities that may not be patchable.

 

Update and Test Security Procedures

 

It’s imperative that Army IT professionals maintain strong and consistent security analysis to ensure the efficacy of new network technologies. This is especially true during the convergence and integration phase, when security holes may be more likely to arise.

 

Consider utilizing real-world operational testing, event simulation, and red and blue team security operations. Networks are evolutionary, not revolutionary, and these processes should be implemented every time a new element is added to the network.

 

Monitor the Network, Inside and Out

 

IT professionals will need to strengthen their existing network monitoring capabilities to identify and remediate issues, from bottlenecks to breaches, quickly and efficiently. They will need to go beyond traditional network monitoring to adopt agile and scalable monitoring capabilities that can be applied across different domains to support different missions.

 

Look no further than the Army’s Command Post Computing Environment for an initiative requiring more robust monitoring than typical on-premises monitoring capabilities. Similarly, a network that enables multidomain operations will need to be just as reliable and secure as traditional networks, even though the demands placed on the network will most likely be far more intense than anything the Army is accustomed to handling.

 

For the multidomain concept to succeed, the Army needs a network that can enable the initiative. Building such a network starts with modernization and continues with deploying the necessary processes and technologies to ensure secure and reliable operations. Those are the core tenets of a network built to handle whatever comes its way, from land, air, sea, space, or cyberspace.

 

Find the full article on SIGNAL.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Binary Noise for Logs.  Photo modified from Pixabay.

Four score and one post ago, we talked about Baltimore’s beleaguered IT department, which is in the throes of a ransomware-related recovery.

 

Complicating the recovery mission is the fact that the city's IT team didn't initially know when the systems were compromised. They knew when the systems went offline, but not whether the systems were infected earlier. The IT team can't go back and check a compromised system's logs because the ransomware rendered the infected computers inaccessible.

 

Anyone who has worked in IT operations knows logs can contain a wealth of valuable information. Logs are usually the first place you go to troubleshoot or detect problems. Logs are where you can find clues of security events. Commonly, though, you can end up having to sift through a lot of data to find the information you need.

 

In any ransomware or security attack, a centralized logging (syslog) server is an invaluable resource for tracing back and correlating events across a plethora of servers and network devices. Aggregation and correlation are jobs for a SIEM.
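As a minimal sketch of the idea (the collector hostname, port, and log message are placeholders, not anything tied to a particular SIEM), an application or server can ship a copy of its logs to a central syslog collector with nothing more than Python's standard library:

```python
import logging
import logging.handlers

# Forward a copy of every log record to a central syslog collector.
# "loghost.example.com" and port 514 are placeholders for your
# organization's aggregation server (or SIEM ingest endpoint).
syslog = logging.handlers.SysLogHandler(address=("loghost.example.com", 514))
syslog.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))

logger = logging.getLogger("app.security")
logger.setLevel(logging.INFO)
logger.addHandler(syslog)

# Example event: the kind of record you'd want preserved off-box
# even if this machine is later encrypted by ransomware.
logger.warning("Multiple failed logins for user %s from %s", "svc_backup", "10.0.0.42")
```

The specific handler isn't the point; the point is that every system forwards its events somewhere else before an incident, so there's still something to search afterward.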

 

All About Those SIEMs

 

SIEM is mostly mentioned as an acronym rather than by its expanded form, Security Information and Event Management. SIEMs serve an essential role in security threat detection and commonly make up a valuable part of an organization's defense-in-depth strategy.

 

SIEM tools also form the basis for many regulatory auditing requirements for PCI-DSS, HIPAA, and NIST security checks, as well as aid with threat detection.

 

In a video recording of a session at RSA 2018, a presenter asked the audience who was happy with their current SIEM. When no hands went up, the presenter quipped that maybe the happy people were in the next room.

 

If I were in that room, I wouldn't raise my hand either. On a previous contract, our SIEM tool consumed terabytes upon terabytes of space. When it came time to pull information, the application was slow and unresponsive. Checking the logs ourselves was a more efficient use of time. So, why did we do this? Our SIEM was a compliance checkbox.

 

Extending SIEMs With UEBA and SOAR

 

SIEMs are much more than compliance checkboxes. User and Entity Behavior Analytics (UEBA) and Security Orchestration, Automation, and Response (SOAR), when bundled with SIEMs, offer additional capabilities that extend security event management.

 

UEBAs look for normal and abnormal behavior from both users and entities. By using advanced policies and machine learning, UEBAs improve visibility across an organization and help protect against insider threats. However, like SIEMs, UEBAs may require fine-tuning to weed out false positives.

 

SOARs, on the other hand, are designed to automate and respond to low-level security events or threats. SOARs can provide functionality similar to an Intrusion Detection System (IDS) or Intrusion Prevention System (IPS), without the manual intervention.

 

Conclusion

 

At the end of the day, SIEMs, SOARs, UEBAs, and other security tools can be challenging to configure and manage. It makes sense for organizations to begin outsourcing part of this responsibility. Also, you could argue that applications reliant on machine learning belong in cloud-like environments, where you could build large data lakes for additional analytics.

 

In traditional SIEMs, feeding in more information probably won’t result in a better experience. Without dedicated security analysts to fine-tune the data collected, many organizations struggle with unwieldy SIEMs. While it’s easy to blame the tool, in many cases, the people and processes contribute to the implementation woes. 

The title of this post says it all. In IT, we’ve said for years you need to take and test backups regularly to ensure your systems are being backed up correctly. There’s an adage in IT: “The only good backup is a tested backup.”

 

Why Do You Need Backups?

 

I bring this up because I have people coming to our consulting firm after some catastrophic event has occurred, and the first thing I ask for is the backups. Then the conversation usually gets awkward as they try to explain why there are no backups available. The reasons usually run the gamut from "we forgot to set up backups" to "the backup system ran out of space" to "the backup system failed months ago, and no one fixed it."

 

Whatever the reason, the result is the same: there are no backups, there’s a critical failure, and no one wants to explain to the boss why they can’t recover the system. The system is down, and the normal recovery options aren’t available for one reason or another. In these cases, when there’s no backup available, the question becomes, “How critical is this to your business staying operational?” If it’s a truly critical part of your infrastructure, then the backups should be just as critical to the infrastructure as the system is. Those backups need to be taken and then tested to ensure the backup solution meets the needs of the company (and that the backups are being taken).

 

When planning the backups for a key system, the business owners need to be involved in setting the backup and recovery policies; after all, they're responsible for the data. In IT terms, these policies are the Recovery Point Objective (RPO) and the Recovery Time Objective (RTO). In layman's terms, they describe how much data the organization can afford to lose and how long it can take to bring the system back online. These numbers can be anything they need to be, but the smaller they are, the larger the financial cost of meeting them. If the business wants an RPO and RTO of zero, and the business is willing to pay for it, then it's IT's job to complete the request even if we don't agree with it. And that means running test restores of those backups frequently, perhaps very frequently, as we need to ensure the backups we're taking of the system are working.
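As a rough, hypothetical illustration of how those two numbers drive the design (every figure below is made up), the RPO dictates the maximum backup interval and the RTO caps the acceptable restore time:

```python
# Hypothetical numbers to illustrate how RPO/RTO shape a backup schedule.
rpo_minutes = 15               # business tolerates losing at most 15 minutes of data
rto_minutes = 60               # system must be back online within 1 hour
db_size_gb = 500
restore_rate_gb_per_min = 10   # measured from your own test restores, not vendor specs

# Backups (or transaction log backups) must run at least this often to honor the RPO.
max_backup_interval = rpo_minutes
print(f"Back up at least every {max_backup_interval} minutes")

# A full restore must fit within the RTO, including failover/startup overhead.
estimated_restore = db_size_gb / restore_rate_gb_per_min + 10  # +10 min overhead
print(f"Estimated restore time: {estimated_restore:.0f} minutes "
      f"({'meets' if estimated_restore <= rto_minutes else 'misses'} the RTO)")
```

Notice that the restore-rate number only exists if you've actually run test restores, which is exactly the next point.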

 

Why Is It Important to Test Backups?

 

Testing backups should be done no matter what kind of backups are being taken. If you think you can skip test restores because the backup platform reports the backup was successful, then you're failing at taking backups. One of the mantras of IT is "trust but verify." We trust that the backup software package did the backup it says it did, but we verify it by testing. If there's a problem where the backup can't be restored, it's much better to find out during a test restore than when you need to restore the production system. If you wait until the production system needs to be restored to find out the backup failed, there's going to be a lot of explaining to do about why the system can't be restored and what sort of impact that might have on the business, up to and including, potentially, the company going out of business.
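As a minimal sketch of "trust but verify" (the file paths, the SQLite engine, and the orders table are all stand-ins for whatever your environment actually runs), a scheduled job can restore the latest backup to a scratch location and confirm it's usable:

```python
import shutil
import sqlite3
from pathlib import Path

BACKUP = Path("/backups/app-latest.db")    # placeholder path to the newest backup
SCRATCH = Path("/tmp/restore-test.db")     # throwaway restore target

def test_restore() -> bool:
    """Restore the newest backup to a scratch file and verify it opens and checks out."""
    shutil.copy2(BACKUP, SCRATCH)
    with sqlite3.connect(SCRATCH) as conn:
        status = conn.execute("PRAGMA integrity_check;").fetchone()[0]
        # Hypothetical sanity query against a table you know should never be empty.
        row_count = conn.execute("SELECT COUNT(*) FROM orders;").fetchone()[0]
    return status == "ok" and row_count > 0

if __name__ == "__main__":
    ok = test_restore()
    print("Backup restore test:", "PASSED" if ok else "FAILED -- page someone")
```

The same pattern applies to any engine: restore to a throwaway target, run an integrity check and a sanity query, and alert loudly when either fails.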

(Disclaimer: This post was co-written by Leon and Kevin and presents a point/counterpoint style view of Cisco Live US 2019. But without either calling the other “ignorant” or insulting their choice in quantity of romantic liaisons.)

 

Kevin M. Sparenberg

This year, once again, it was my absolute pleasure to be part of the staff attending Cisco Live! US. Part of my time was spent in the booth, and the rest was spent in various sessions catching up on everything from the last year. If there's one thing that resonated with me this year, even more than in previous years, it was how quickly IT is moving.
This year, in a flurry of last-minute activity, I discovered I would be attending Cisco Live. Because of its proximity to the Jewish holiday of Shavuot (which is most notable for the commandment to consume vast quantities of cheesecake; I regret nothing), I didn't think I'd be attending. But I did, and I'm extremely happy the powers-that-be at SolarWinds helped make that happen. Because let me tell you, times, they are a-changing!

Leon Adato
Kevin M. Sparenberg

At the SolarWinds® booth, we talked about all the greatest additions and updates to the portfolio of products. In the last year, SolarWinds has released 32 updates to existing products or completely new products. To say it’s been a busy year may just be a tad bit of an understatement. For booth visitors, we got to show off some of the new enhancements to our network management portfolio and retread some of the past favorites (looking at you here, NetPath).

It really should come as no surprise that “it’s been a busy year.” They’re all busy. You folks keep telling us about persistent problems and challenges, and that’s what we set out to fix. If you’ve been following these Cisco Live reports for even a couple of years, you know that this is the time when we release those solutions into the wild.

Leon Adato
Kevin M. Sparenberg

This year, more than most, I spoke with people whose titles didn't include the word "network." On the surface this would seem odd, but more and more organizations are recognizing that the network isn't what their end users interact with; it's the highway on which those interactions happen. Having a blazing fast network is great, but it doesn't help if you're connecting to a server that takes 1,500 milliseconds to even send back the first bit of data. If that happens, you're still going to have a horrible experience. As a former network engineer, I know the network is always blamed first, but how often is it really a network problem?

From MS Ignite to VMWorld to Cisco Live, I’ve been noticing that the same people are at every show. I don’t mean the same EXACT people, like some weird convention groupies or something. I mean that rather than having strictly systems folks coming to Ignite, virtualization admins at VMWorld, and network engineers at Cisco Live, the people who visit are concerned with multiple (or all) areas of their environment. They’re as likely to talk about virtual database instances on vSANs as they are about load balancing or traditional route/switch.

Leon Adato

Day One

 

Kevin M. Sparenberg

But those who know us know that we like to have fun at these events. One such thing was #KiltedMonday, a tradition where, on the Monday of Cisco Live! US, attendees are encouraged to wear their kilts instead of blasé pants. I added a kilt to my ensemble a few years ago, and I kept it strong this year. As fun as it was to show off my shins, it was even better to speak with everyone who came up to the booth. But the best part, the absolute best part, was introducing some new SolarWinds people to the booth experience.

 

We had a handful of "raw" recruits this year. This wasn't just their first Cisco Live; it was their first ever event with SolarWinds. Monday is always a mad dash for swag, new feature demos, and saying "hi" to old friends. All I can say is that the new recruits did a smashing job at handling the flow, pivoting on topics, and responding to questions. Day one is typically the most hectic and a good trial-by-fire. Not unlike starting a new job in IT.

Monday was still a holiday for me.

Leon Adato
Kevin M. Sparenberg

Status update: Voice is already scratchy. Must come up with a regimen so I don't "talk" myself hoarse on the first day.

Day Two

 

This was my first day on the show floor, and I have to tell you there's no convention like Cisco Live. It's hard to describe the intensity of the folks who come to the booth, the level of depth we get to go into in our conversations and demos, and the sheer love—of the product, of the philosophy of monitoring, and of the team. I've taken to asking visitors to the booth what THEY think the "big story" of Cisco Live is. On Day 2, all anyone could talk about was the changes to the Cisco certification program. Immediately after the keynote, there were a lot of emotions… ahem… "passionately held and strongly worded opinions" across the convention floor, but by the end of the day the prevailing wisdom could be summarized as:

  1. It’s not changing immediately
  2. Most certifications aren’t going away
  3. This makes it easier for certain folks (especially CCIEs) to re-up their certs

…and thus, calm was restored to the masses.

Leon Adato
Kevin M. Sparenberg

On day 2, we held a SolarWinds User Group™ at Roy's Restaurant right next to the San Diego Convention Center. The venue was gorgeous, with vistas overlooking the marina and fantastic Polynesian fusion food. The signature mai tais were not to be missed. At that event, Chris O'Brien, Group Product Manager for the SolarWinds Network Management portfolio, talked about all the goodness we delivered the week before. Then it was up to me to talk about everything else SolarWinds released in the last year. Needless to say, I didn't cover everything, or I'd still be there talking. Part of SWUG™ is the presentations, but most of SWUG is the conversations between customers. The best, most important part of our community is you, the people who work with our solutions every day. I'm constantly amazed at how gracious and courteous you are with any newcomers to the THWACK® community. You share your expertise, tips, tricks, and more with anyone willing to ask. If I had the opportunity to shake hands with every member who has helped me over the years, I'd do it in a heartbeat and be humbled by the experience. This is just one reason I feel this is the greatest community in IT. With day two wrapping, I was both exhausted and energized for the rest of the week.

Day Three

 

Kevin M. Sparenberg

The post-SWUG hangover is always a rough day. It's not a true hangover, but the exhilaration of speaking with so many people can make the follow-up day feel just a little bit longer. Thankfully, there was a new wave of swag in the booth, which always makes the customer in me happy. Today's word of the day was API, and I don't care how you pronounce it (cue Leon groaning at me). I find it very profound that technology is so cyclical. When we started, everything was command line only; then we moved on to a GUI, and now we're back to the command line. It's just one of those oddities I've picked up from being in the industry for so many years. As new technology comes out, there's a push, nearly a requirement, to make sure you can attach to it via a programmable interface.

The big story from the floor was the way Cisco is adopting a "Meraki style" process for integrating new features into their products. What I mean by that is this: with Meraki wireless controllers, when a new feature comes out, the software updates in the cloud and BOOM! you're in business. And honestly, this has been one of the big complaints of CLUS attendees for several years. They come to the show, get all hyped up on the latest tech (including DNA and intent-based networking), and then go back to their office, only to realize that the molecules they have on site (i.e., the physical gear) can't handle it, and won't be retired for several years. While the bean counters may complain about subscription-based models, the fact that it will get new code into our shops is going to change the speed at which new technologies, protocols, and techniques are adopted.

Leon Adato

Day Four

 

Kevin M. Sparenberg

The last day on the show floor for the booth staff brought both sighs of relief and sadness at parting. I love talking to people, so the final day always makes me a little sad. I love hearing how everyone is handling their own IT infrastructure. No one has infinite time, so you'll never get to work hands-on with every technology. Being able to hear what various attendees got from the event is always a high point. Yes, product announcements, new technologies, and enhancements abounded, but the real thing for me is the community that happens at events. Part of that is connecting with other people. Those people could be coworkers from another office, other IT professionals you've gone to for advice, or new people you meet for the first time. One of the ways I leveraged this event to reconnect was to go to lunch with my friend and co-worker, Leon, where we discussed our personal findings from the event. That, in part, was the impetus for this post.

One attendee told me he thought that “the ubiquity of the story around ‘umbrella’” was the big story for the show. Which is awesome (because #SECURITY!) but it’s also amusing, because you can find “Introducing Umbrella” as a session at every Cisco Live (US, EUR, LATAM, and APAC) back to 2016. Even at SolarWinds, we know that sometimes you have to keep talking about an idea for a bit before people start to notice it (looking at you, AppStack™). The other story worth noting was just how busy the expo floor remained. We had folks coming into the booth asking serious questions, and looking for in-depth demonstrations, up to 30 minutes AFTER they closed the show down. Now THOSE are some dedicated IT professionals!

Leon Adato

The Completely Un-Necessary Summary

 

Kevin M. Sparenberg

At the end of the week, the biggest takeaway for me was the number of non-network people walking around Cisco Live. This concerned a handful of the booth staff, some of the other vendors, and many of the attendees, but not me. For me, this was the year when topics started shifting from an us vs. them to more of an us with them focus. Functional silos are great for training and specialization, but horrible for troubleshooting real-world issues. The number of systems administrators, security operations professionals, and monitoring engineers this year was up and that made me smile. Attendees inquiring about how complete stack monitoring can help them get to root causes (or at least to stop spinning their wheels chasing red herrings) was an absolute pleasure to hear. Even if the problem isn’t in your wheelhouse, ignoring it doesn’t make it go away. Good tools can help pinpoint the problem and get you back to happy.

Next up for me is the New York SWUG and I’m looking forward to connecting with people again at that event. Hopefully by that time, my voice will have returned to its pre-event levels. Cisco Live may be over for this year, but that doesn’t mean the lessons learned or the connections made will go away.

As Kevin and I both noted, Cisco Live is unlike any other show we attend. At the same time, this year was different in one notable way. While every other year the visitors to our booth were both steady (the booth really never got empty) and relentless (we'd usually be doing a demo and have one or more people queueing up for the next one), this year we saw distinct ebbs and flows. At first I worried that SolarWinds had somehow lost its mojo, but then we saw that the rest of the expo floor was equally barren. By the third day, I was asking around for reasons. It turns out there was a rumor going around that if you badged into a session and left early, the RFID sensors would pick it up and you wouldn't get credit for attendance. That kept butts in chairs when they might otherwise have used the time to talk to your favorite monitoring enthusiasts. We'll have to wait until next year to see if that rumor continues to hold sway.

Speaking of next year, mark your calendar and start saving your pennies. Cisco Live returns to Las Vegas, running from May 30 – June 4. For those of you with a Jewish holiday calendar handy, that means it starts the day AFTER Shavuot. I’ll make sure I have plenty of leftover cheesecake on hand.

Leon Adato

Those are our thoughts. What was your experience?

The hybrid cloud makes everything better. It's helping drive down costs, making the software we use to run our data centers easier to manage, and pushing innovation. The public cloud has stolen the headlines for years, and for good reason. A lot of exciting innovation has been happening in the public cloud and will continue to happen. Yet even though innovation thrives in the public cloud, we find ourselves drawn back to our private data centers, because companies need access to the data that lives there. This data fuels the business. Traditional technology companies that have focused on private data centers have listened and are starting to adopt a hybrid methodology.

 

1. Hybrid Cloud Can Reduce Cost

Competition is a good thing. It pushes everyone to do their best, to listen to customers' requests, and to provide choice for consumers. Choice lets consumers decide between two or more products. It allows consumers to evaluate the features of multiple products and decide whether the cost is worth the investment. Private data center customers have more choices today than ever before, and they can save money by reducing their hardware footprint. Servers in a private data center are paid for upfront and are provisioned based on a company's peak utilization; the cost is the same whether workloads are idle or at maximum capacity. Additional server capacity may be seasonal or only needed during a specific timeframe. Hybrid cloud lets us stop over-provisioning workloads on-premises and move that extra compute demand to the cloud, helping reduce costs.
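A back-of-the-envelope comparison makes the point; every number below is hypothetical, but the shape of the math is what matters:

```python
# Hypothetical costs: size on-prem for the baseline and burst seasonal load to the cloud,
# versus provisioning on-prem hardware for the yearly peak.
baseline_servers = 20
peak_servers = 35                # only needed ~2 months a year
server_capex_per_year = 3000.0   # amortized on-prem cost per server per year
cloud_instance_hourly = 0.50     # comparable cloud instance price
burst_hours = 2 * 30 * 24        # two months of burst capacity

all_on_prem = peak_servers * server_capex_per_year
hybrid = (baseline_servers * server_capex_per_year
          + (peak_servers - baseline_servers) * cloud_instance_hourly * burst_hours)

print(f"Provision for peak on-prem: ${all_on_prem:,.0f}/year")
print(f"Baseline on-prem + cloud burst: ${hybrid:,.0f}/year")
```

With these made-up figures, the hybrid approach comes out tens of thousands of dollars cheaper per year; your own numbers will differ, but the exercise is worth doing before the next hardware refresh.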

 

2. Reduction in Complexity

Every IT shop wants a well-maintained infrastructure. We don't want servers to go down, switches to burn out, or applications to stop. There are a lot of moving parts when you own the whole stack. You have to worry about cooling the servers, powering the equipment, and protecting it from theft or damage—and this is just layer one. The public cloud manages everything at the physical layer, so that's one less thing IT groups have to worry about. The real advantage in reducing complexity comes when you can offload everything below layer seven and worry only about the application.

 

Public cloud and Software-as-a-Service companies pioneered this way of letting customers access their applications and data without maintaining the underlying infrastructure. Traditional data center companies have taken note and are providing the same level of service. These vendors now offer their products as cloud-based services where the control plane and infrastructure hardware are updated and maintained by the vendor. Offloading software upgrades from on-premises systems administrators, and sparing admins the 2 a.m. service calls caused by hardware failures: isn't this what we're after? Reducing the complexity of our environments.

 

3. A Platform for Business Innovations

Speed is the name of the game in today's business world. Getting new products and services out and into the customer's hands is everyone's goal today. Public cloud enables companies to test new services faster than ever before. Services in the cloud are ready to be consumed and can be tested with the swipe of a credit card. This allows companies to test new services, fail, and move on to test the next service faster. No more waiting for hardware to arrive and be provisioned for a single test that could last weeks, if not months. Instead, this time is used to test and develop new services.

 

Innovation doesn't always come in the form of technology. From a financial perspective, the cloud provides the ability to purchase services as an operating expense. On-prem equipment is usually purchased as a capital expense. Capital expenses (CapEx) are usually paid upfront and depreciate over multiple years. Operating expenses (OpEx) are used to run the business and are paid monthly or annually. Items classified as CapEx take longer to get approval from management, or require C-level approval, because all the money is needed upfront; when IT requests a hardware purchase, the business needs all the money now. Public cloud is an OpEx consumption model, and data center vendors are helping customers move away from the CapEx model and into the OpEx model. They're now offering on-prem data center customers consumption-based, pay-as-you-go pricing that closely resembles the cloud model: you pay only for what you use, with the flexibility to consume additional resources during peak times, but in your private data center.

 

Conclusion

The rise of the public cloud has given companies the ability to quickly test new services to see if they meet requirements, with little financial investment from the organization. Shorter testing cycles and the ability to fail fast allow a company to find the right choice and get its own services to market faster. Linking the public cloud to the private data center so cloud services can access company data is the ideal architecture. This hybrid cloud model is helping people get excited about IT again. It allows new services to be explored and offered to customers, and customers are getting better products, sometimes at a reduced price. Flexibility can bring complexity, but moving our daily tasks to a more managed service offering reduces it, because there's less to worry about. These new options are forcing old technology companies to innovate and bring new offerings to their customers. These new offerings don't have to be technical—they can be new payment offerings. No matter how you look at it, the hybrid cloud just makes everything better.

Home after a couple weeks on the road between Cisco Live! and Data Grillen. My next event is Microsoft Inspire, and if you're attending, please stop by the booth so we can talk data.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Meds prescriptions for 78,000 patients left in a database with no password

This is the second recent breach involving MongoDB and underscores the need for consequences for those who continue to practice poor security methods. Until we see stiffer penalties for the individuals involved, you can expect those rockstar MongoDB dev teams to get new jobs and repeat all the same mistakes.

 

Florida City Pays $600,000 Ransom to Save Computer Records

Never, ever pay the ransom. There's no guarantee you get your files, and you become a target for others (because now they know you will pay). Also? Time to evaluate your security response plan regarding ransomware, especially if you're running older software. It's just a matter of time before Anton in Accounting clicks on that phishing link.

 

AMCA Files for Bankruptcy Following Data Breach

Nice reminder for everyone that the result of a breach can be your company going out of business. Life comes at you fast.

 

Machine Learning Doesn’t Introduce Unfairness—It Reveals It

Great post. Machine learning algorithms are not fair, because the data they use has inherent bias. And the machines are good at uncovering that bias. In some ways, we humans have built these code machines, and the result is we are looking at ourselves in the mirror.

 

Microsoft bans Slack and discourages AWS and Google Docs use internally

Because the free version of Slack doesn't meet Microsoft security standards. Maybe that should have been the headline instead of the clickbait trying to portray Microsoft as evil.

 

Cyberattack on Border Patrol subcontractor worse than previously reported

Your security is only as strong as your weakest vendor partner. Your security protocols could be the best in the world, but they won't matter if you grant a partner access and they cause the breach.

 

Nashville is banning electric scooters after a man was killed

This is absurd. The scooters didn't do anything wrong. They should not be penalized for the actions of a drunk person making bad choices. I look forward to the mayor banning cars the next time a drunk driver kills someone in downtown Nashville.

 

Words cannot describe the glory that is Data Grillen:

 

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Jim Hansen. Jim offers three suggestions to help with security, including using automation and baselines. I agree with Jim that one of the keys is to try to keep security in the background, and I’d add it’s also important to be vigilant and stick to it.

 

Government networks will continue to be an appealing target for both foreign and domestic adversaries. Network administrators must find ways to keep the wolves at bay while still providing uninterrupted and seamless access to those who need it. Here are three things they can do to help maintain this delicate balance.

 

1. Gain Visibility and Establish a Baseline

 

Agency network admins must assess how many devices are connected to their networks and who’s using those devices. This information can help establish visibility into the scope of activity taking place, allow teams to expose shadow IT resources, and root out unauthorized devices and users. Administrators may also wish to consider whether or not to allow a number of those devices to continue to operate.

 

Then, teams can gain a baseline understanding of what’s considered normal and monitor from there. They can set up alerts to notify them of unauthorized devices or suspicious network activity outside the realm of normal behavior.
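The baseline-then-alert idea can be as simple as comparing what discovery sees against an approved inventory. In the sketch below, the MAC addresses and device names are invented, and the discovered list would really come from a network discovery scan or NAC export:

```python
# Hypothetical inventory check: flag anything on the network that isn't in the baseline.
approved_devices = {
    "00:1a:2b:3c:4d:5e": "core-switch-01",
    "00:1a:2b:3c:4d:5f": "app-server-07",
}

def audit(discovered_macs):
    """Return MAC addresses seen on the network but absent from the approved baseline."""
    return [mac for mac in discovered_macs if mac not in approved_devices]

# 'discovered' would normally be fed from a discovery scan, DHCP logs, or NAC data.
discovered = ["00:1a:2b:3c:4d:5e", "de:ad:be:ef:00:01"]
for rogue in audit(discovered):
    print(f"ALERT: unapproved device {rogue} detected on the network")
```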

 

All this monitoring can be done in the background, without interrupting user workflows. The only time users might get notified is if their device or activity is raising a red flag.

 

2. Automate Security Processes

 

Many network vulnerabilities are caused by human error or malicious insiders. Government networks comprise many different users, devices, and locations, and it can be difficult for administrators to detect when something as simple as a network configuration error occurs, particularly if they’re relying on manual network monitoring processes.

 

Administrators should create policies outlining approval levels and change management processes so network configuration changes aren’t made without approval and supporting documentation.

 

They can also employ an automated system running in the background to support these policies and track unauthorized or erroneous configuration changes. The system can scan for unauthorized or inconsistent configuration changes falling outside the norm. It can also look for non-compliant devices, failed backups, and even policy violations.
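One hypothetical way to picture that background scan: keep a "golden" copy of each approved configuration and diff the running config against it on a schedule. The paths and device names below are placeholders, and a real system would pull configs from the devices themselves:

```python
import difflib
from pathlib import Path

# Placeholder locations for approved ("golden") and freshly pulled running configs.
GOLDEN_DIR = Path("/configs/golden")
RUNNING_DIR = Path("/configs/running")

def find_drift(device):
    """Return a unified diff of changes for one device (empty list means compliant)."""
    golden = (GOLDEN_DIR / f"{device}.cfg").read_text().splitlines()
    running = (RUNNING_DIR / f"{device}.cfg").read_text().splitlines()
    return list(difflib.unified_diff(golden, running, "approved", "running", lineterm=""))

for device in ("edge-router-01", "dist-switch-02"):   # hypothetical device names
    drift = find_drift(device)
    if drift:
        print(f"{device}: configuration drift detected ({len(drift)} diff lines), opening a ticket")
```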

 

When a problem arises, the system can automatically correct the issue while the IT administrator surgically targets the problem. There’s no need to perform a large-scale network shutdown.

 

Automated and continuous monitoring for government IT can go well beyond configuration management, of course. Agencies can use automated systems to monitor user logs and events for compliance with agency security policies. They can also track user devices and automatically enforce device policies to help ensure no rogue devices are using the network.

 

Forensic data captured by the automated system can help trace the incident back to the source and directly address the problem. Through artificial intelligence and machine learning, the system can then use the data to learn about what happened and apply that knowledge to better mitigate future incidents.

 

3. Lock Down Security Without Compromising Productivity

 

The systems and strategies outlined above can maintain network security for government agencies without interfering with workers’ productivity. Only when and if something comes up is a user affected, and even then, the response will likely be as unobtrusive as simply denying network access to that person.

 

In the past, that kind of environment has come with a cost. IT professionals have had to choose between providing users with unfettered access to the tools and information they need to work and tightening security to the point of restriction.

 

Fortunately, that approach is no longer necessary. Today, federal IT administrators can put security at the forefront by making it work for them in the background. They can let the workers work—and keep the hackers at bay.

 

Find the full article on GCN.

 


Are you excited for this post? I certainly know I am!

 

If this is the first article you're seeing of mine and you haven't read my prior article, "Why Businesses Don't Want Machine Learning or Artificial Intelligence," I advise you go back and read it as a primer for some of the things I'm about to share here. You might get “edumicated,” or you might laugh. Either way, welcome.

 

Well, officially hello returning readers. And welcome to those who found their way here due to the power of SEO and the media jumping all over any of the applicable buzzwords in here.

 

The future is now. We’re literally living in the future.

 

Tesla Self-Driving, look ma no hands!

Image: Tesla/YouTube

 

That's the word from the press rags, if you've seen the video of a Tesla running on full Autopilot and handling the complicated series of commutes and drives cited in the article "New Video Shows Tesla's Self-Driving Technology at Work." And you would be right. Indeed, the future IS now. Well, kind of. I mean, it's already a few seconds from when I wrote the last few words, so I'm clearly traveling through time...

 

But technology and a novel task like driving in traffic are a little more nuanced than this.

 

"But I want this technology right now. Look, it works. Stop your naysaying. You hate Tesla, blah blah blah, and so on."

 

That feels very much like Apple/Android fanboy or fan-hate when someone says anything negative about a thing they want/like/love. Nobody wants this more than me. (Well, except for the robots ready to kill us.)

 

Are There Really Self-Driving Teslas?

 

You might be surprised to know that Tesla has advanced significantly in the past few years. I know, imagine that—so much evolution! But just as we reward our robot overlords for stopping at a stop sign, they're only as good as the information we feed them.

 

We can put the mistakes of 2016 behind us with tragedies like this: "Tesla self-driving car fails to detect truck in a fatal crash."

 

Fortunately, Tesla continues to improve, and we'll be ready for fully autonomous, self-driving vehicles roaming the roads without flaw or problem by 2019. (Swirly lines, and flashback to just a few months into 2019: "Tesla didn't fix an Autopilot problem for three years, and now another person is dead.")

 

Is the Tesla Autopilot Safe?

 

Around this time, as I was continuing my study, research, and analysis of this and problems like it, I came across the findings of @greentheonly.

 

Trucks are misunderstood and prefer to be identified as overpasses

Image: https://twitter.com/greentheonly/status/1105996445251977217

 

And rightly so, we can call this an anomaly. It doesn't happen that frequently. It's not a big deal, except when it does happen. And it's not just a question of when, but the fact that it happens at all: whether it's seeing the undercarriage of a truck, interpreting it as an overpass you can "safely pass" under, and shearing the top off the cab, or seeing a truck beside you and interpreting the space beneath it as a safe "lane" to change into.

 

But hey, that's Autopilot. We're not going to use that anymore until it's solid, refined, and safe. Then the AI and ML can't kill me. I'll be making all the decisions.

 

 

ELDA Taking Control!

ELDA may be the death of someone!

Image: https://twitter.com/KevinTesla3/status/1134554915941101569

 

If you recall in the last article, I mentioned the correlation of robots, Jamba Juice, and plasma pumps. Do you ever wonder why Boston Dynamics doesn't have robot police officers like the ED-209 working on major metro streets, providing additional support akin to RoboCop? (I mean, other than the fact that they're barely allowed to use machine learning and computer vision. But I digress.)

 

It’s because they're not ready yet. They don't have things fully baked. They need a better handle on the number of “faults” that can occur, not to mention liability.

 

Is Autonomous Driving Safe?

 

Does this mean, though, that we should stop where we are and no one should use any kind of autonomous driving function? (Well, partially...) The truth is, there are, on average, 1.35 million road traffic deaths each year. Yes, that figure is worldwide, but that figure is also insanely staggering. If we had autonomous vehicles, we could greatly diminish the number of accidents we experience on the roads, which could bring those death tolls down significantly.

 

And someday, we will get there. The vehicles’ intelligence is getting better every day. They make mistakes, sometimes not so bad—"Oh, I didn't detect a goose on the road as an object/living thing and ended up running it over." Or, "The road was damaged in an area, so we didn't detect that was a changing lane/crossing lane/fill-in-the-blank of something else."

 

The greatest strength of autonomous vehicles like Tesla, Waymo, and others is their telemetry. But their greatest weakness is their reliance solely on some points of telemetry.

 

Can Self-Driving Cars Ever Be Safe?

 

In present-day 2019, we rely on vehicles with eight cameras (hey, that's more eyes than us humans have!), some LIDAR data, and a wealth of knowledge of what we should do in conditions on roadways. Some complaints I've shared with various engineers of some of these firms are the limitations of these characteristics, mainly that the cameras are fixed, unlike our eyes.

 

Landslide!

Video: https://youtu.be/SwfKeFA9iEo?t=22

 

So, if we encounter a rockslide, a landslide, or something falling from above (a tree, a plane, a meteorite, a car-sized piece of mountain, etc.), we'll be at the mercy of the vehicle and its ability to identify and detect it. This won't be that big of a deal, though. We'll encounter a few deaths here or there, it'll make the press, and they'll quietly cover it up or claim to fix it in the next bugfix released over the air to the vehicles (unlike the aforementioned problem that went unsolved for three years).

 

The second-biggest problem we face is that, just like us, these vehicles are (for the most part) working in a vacuum. A good and proper future of self-driving autonomy will involve vehicles communicating with each other, with street lights, traffic cameras, and environmental data, and with telemetry from towers, roads, and other sensors. Rerouting traffic around issues will become commonplace. When an ambulance is rushing someone to a hospital, the system can clear the roadways in favor of emergency vehicles. Imagine if buses ran on the roads efficiently; the same could be true of every vehicle.

 

That's not a 2020 vision. It's maybe a 2035 or 2050 vision in some cities. But it's a future we can clearly see ahead of us.

 

The Future of Tesla and Self-Driving Vehicles

 

It may seem like I'm critical of Tesla and their Autopilot program. That's because I am. I see them jumping before they crawl. I've seen deaths rack up, and I've seen many VERY close calls. It's all in the effort of better learning and training, but it's on the backs of consumers and on the graves of the end users who've been made to believe these vehicles are tomorrow's self-drivers. In reality, they're in an alpha state, given the sheer limited amount of telemetry available.

 

Will I use Autopilot? Yeah, probably... and definitely in an effort of discovering and troubleshooting problems because I'm the kind of geek who likes to understand things. I don't have a Tesla yet, but that's only a matter of time.

 

So, I obviously cannot tell you what to do with your space-age vehicle driving you fast forward into the future, but I will advise you to be cautious. I've had to turn ELDA off on my Chevy Bolt because it kept steering me into traffic, and that system has little to nothing I would consider "intelligent" about it in the grand scheme of things.

 

At the start of this article, I asked if you were as excited as I was. I'm not going to ask if you're as terrified as I am! I will ask you to be cautious. Cautious as a driver, cautious as a road warrior. The future is close, so let's make sure you live to see it with the rest of us. Because I look forward to a day when the number 10 cause of death worldwide is a thing of the past.

 

Thank you and be safe out there!

We put in all this work to create a fully customized toolchain tailored to your situation. Great! I bet you're happy you can change small bits and pieces of your toolchain when circumstances change, and the Periodic Table of DevOps tools has become a staple when looking for alternatives. You're probably still fighting the big vendors that want to offer you a one-size-fits-all solution.

 

But there's a high chance you're drowning in tool number 9,001. The tool sprawl, like VM sprawl after we started using virtualization 10-15 years ago, is real. In this post, we'll look at managing the tools managing your infrastructure.

 

A toolchain is supposed to make it easier for you to manage changes, automate workflows, and monitor your infrastructure, but isn't it just shifting the problem from one system to another? How do we prevent spending disproportionate amounts of time managing the toolchain, keeping us from more important value-add work?

 

The key here isn’t just to look at your infrastructure and workflows with a pair of LEAN glasses, but also to look at the toolchain with the same perspective. Properly integrating tools and selecting the right foundation makes all the difference.

 

Integration

One key aspect of managing tool sprawl is properly integrating all the tools. With each handover between tools, there's a chance of manual work, errors, and friction. But what do we mean by properly integrating? Simply put: no manual work between steps, and as little custom code or scripting as possible.

 

In other words, tools that integrate natively with each other take less custom work. The integration comes ready out of the box and is an integral, vendor-supported part of the product. This means you don't have to do (much) work to integrate it with other tools. Selecting tools that natively integrate is a great way to reduce the negative side effects of tool sprawl.

 

Fit for Purpose to Prevent DIY Solutions

Some automation solutions are very flexible and customizable, like Microsoft PowerShell. It's widely adopted, very flexible, highly customizable, and one of the most powerful tools in an admin's tool chest, but getting started with it can be daunting. This flexibility leads to some complexity, and often there's no single way of accomplishing things. This means you need to put in the work to make PowerShell do what you want, instead of a vendor providing workflow automation for you. Sometimes, it's worthwhile to use a fit-for-purpose automation tool (like Jenkins, SonarQube, or Terraform) that has automated the workflows for you instead of a great task automation scripting language. Developing and maintaining your own scripts for task automation and configuration management can easily take up a large chunk of your workweek, and some of that work has little to do with the automation effort you set out to accomplish in the first place. Outsourcing that responsibility to the company (or open-source project) that created the high-level automation logic (Terraform by HashiCorp is a great example of this) makes sense, saving your time for job-specific work adding actual value to your organization.

 

If you set out to use a generic scripting language, choose one that fits your technical stack (like PowerShell if you're running Microsoft stack) and technical pillar (such as Terraform for infrastructure-as-code automation).

Crisis Averted. Here's Another.

So, your sprawl crisis has been averted. Awesome; now we have the right balance of tools doing what they're supposed to do. But there's a new potential hazard around the corner: lock-in.

 

In the next and final post in this series, we'll have a look at the different forms of lock-in: vendor lock-in, competition lock-in, and commercial viability. 

 

Home from San Diego and Cisco Live! for a few days, just enough time to spend Father's Day with friends and family. Now I'm back on the road, heading to Lingen, Germany, for Data Grillen. That's right, an event dedicated to data, bratwurst, and beer. I may never come home.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

This Austrian company is betting you’ll take a flying drone taxi. Really.

I *might* be willing to take one, but I’d like to know more about how they plan to control air traffic flow over city streets.

 

How Self-Driving Cars Could Disrupt the Airline Industry

Looks like everyone is going after airline passengers. If airlines don’t up their level of customer service, autonomous cars and flying taxis could put a dent in airline revenue.

 

Salesforce to Buy Tableau for $15.3 Billion in Analytics Push

First, Google bought Looker, then Salesforce buys Tableau. That leaves AWS looking for a partner as everyone tries to keep pace with Microsoft and PowerBI.

 

Genius Claims Google Stole Lyrics Embedded With Secret Morse Code

After decades of users stealing content from websites found with a Google search, Google decides to start stealing content for themselves.

 

Your Smartphone Is Spying (If You’re a Spanish Soccer Fan)

Something tells me this is likely not the only app that is spying on users. And while La Liga was doing so to catch pirated broadcasts, this is a situation where two wrongs don’t make a right.

 

Domino’s will start robot pizza deliveries in Houston this year

How much do you tip a robot?

 

An Amazing Job Interview

I love this approach, and I want you to consider using it as well, no matter which side of the interview table you're sitting on.

 

Sorry to all the other dads out there, but it's game over, and I've won:

 

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Brandon Shopp about choosing a database tool to help improve application performance and support advanced troubleshooting. Given the importance of databases to application performance, I’m surprised this topic doesn’t get more attention.

 

The database is at the heart of every application; when applications are having performance issues, there’s a good chance the database is somehow involved. The greater the depth of insight a federal IT pro has into its performance, the more opportunity to enhance application performance.

 

There's a dramatic difference between simply collecting data and correlating and analyzing that data to get actionable results. For example, federal IT teams often collect highly granular information that's difficult to analyze; others may collect such a vast amount of information that correlation and analysis become too time-consuming.

 

The key is to unlock a depth of information in the right context, so you can enhance database performance and, in turn, optimize application performance.

 

The following are examples of “non-negotiables” when collecting and analyzing database performance information.

 

Insight Across the Entire Environment

 

One of the most important factors in ensuring you’re collecting all necessary data is to choose a toolset providing visibility across all environments, from on-premises, to virtualized, to the cloud, and any combination thereof. No federal IT pro can completely understand or optimize database performance with only a subset of information.

 

Tuning and Indexing Data

 

Enhancing database performance often requires significant manual effort. That’s why it’s critical to find a tool that can tell you where to focus the federal IT team’s tuning and indexing efforts to optimize performance and simultaneously reduce manual processes.

 

Let’s take database query speed as an example. Slow SQL queries can easily result in slow application performance. A quality database performance analyzer presents federal IT pros with a single-pane-of-glass view of detailed query profile data across databases. This guides the team toward critical tuning data while reducing the time spent correlating information across systems. What about indexing? A good tool will identify and validate index opportunities by monitoring workload usage patterns, and then make recommendations regarding where a new index can help optimize inefficiencies.

 

Historical Baselining

 

Federal IT pros must be able to compare performance levels—for example, expected performance against abnormal performance—by establishing historical baselines of database performance: looking at performance at the same time on the same day last week, the week before, and so on.

 

This is the key to database anomaly detection. With this information, it’s easier to identify a slight variation—the smallest anomaly—before it becomes a larger problem.
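Conceptually, the comparison can be as simple as the sketch below, which checks the current hour's average query latency against the same hour from previous weeks. The numbers and the three-sigma threshold are purely illustrative; real database performance tools are considerably more sophisticated:

```python
from statistics import mean, stdev

# Hypothetical history: average query latency (ms) for the same hour of the same
# weekday over the previous four weeks, plus the value observed right now.
baseline_ms = [120, 115, 130, 125]
current_ms = 210

mu, sigma = mean(baseline_ms), stdev(baseline_ms)
threshold = mu + 3 * sigma   # simple "three sigma" rule over the historical baseline

if current_ms > threshold:
    print(f"Anomaly: {current_ms} ms vs. baseline {mu:.0f} ms (threshold {threshold:.0f} ms)")
else:
    print("Within normal range for this hour")
```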

 

Remember, every department, group, or function within your agency relies on a database in some way or another, particularly to drive application performance. Having a complete database performance tool enables agencies to stop finger-pointing and pivot from being reactive to being proactive; from having high-level information to having deeper, more efficient insight.

 

This level of insight can help optimize database performance and make users happier across the board.

 

Find the full article on our partner DLT’s blog Technically Speaking.

 


This is a continuation of one of my previous blog posts, Which Infrastructure Monitoring Tools Are Right for You? On-Premises vs. SaaS, where I talked about the considerations and benefits of running your monitoring platform in either location. I touched on the two typical licensing models geared either towards a CapEx or an OpEx purchase. CapEx is something you buy and own, or a perpetual license. OpEx is usually a monthly spend that can vary month-to-month, like a rental license. Think of it like buying Microsoft Office off-the-shelf (who does that anymore?) where you own the product, or you buy Office 365 and pay month-by-month for the services that you consume.

 

Let’s Break This Down

As mentioned, there are the two purchasing options. But within those, there are different ways monitoring products are licensed, which could ultimately sway your decision on which product to choose. If your purchasing decisions are ruled by the powers that be, then cost is likely one of the top considerations when choosing a monitoring product.

 

So what license models are there?

 

Per Device Licensing

This is one of the most common licensing models for a monitoring solution. What you need to look at closely, though, is the definition of a device. I have seen tools classify a network port on a switch as an individual device, and I have seen other tools classify the entire switch as a device. You can win big with this model if something like a VMware vCenter server or an IPMI management tool is classed as one device; you can effectively monitor all the devices managed by that management tool for the cost of a single device license.

 

Amount of Data Ingested

This model seems to apply mostly to SaaS-based security information and event management (SIEM) solutions, but it may apply to other products too. Essentially, you're billed per GB of data ingested into the solution. The headline cost for 1GB of data may look great, but once you start pumping hundreds of GBs of data into the solution daily, the costs add up very quickly. If you can, measure your log churn (how much data you actually generate per day) before committing to a solution licensed this way.
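
Before you sign up for per-GB pricing, it's worth doing the back-of-the-napkin math. A minimal sketch with made-up figures; substitute your measured daily volume and the vendor's quoted rate:

# Sketch: rough monthly cost of a per-GB-ingested SIEM license.
# All figures are placeholders for illustration only.
daily_ingest_gb = 250        # average log data generated per day
price_per_gb = 1.50          # advertised per-GB ingest rate (USD)
overhead = 1.0               # bump this if retention/indexing is billed separately

monthly_cost = daily_ingest_gb * 30 * price_per_gb * overhead
print(f"Estimated monthly ingest cost: ${monthly_cost:,.2f}")   # $11,250.00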

 

Pay Your Money, Monitor Whatever You Like

Predominantly used for on-premises monitoring deployments, you pay a flat fee and you can monitor as much or as little as you like. The initial buy-in price may be high, but compared to the other licensing models, it may work out cheaper in the long run. I typically find the established players in the monitoring game have both on-premises and SaaS versions of their products, and there will be a point where paying the upfront cost for the on-premises perpetual license is cheaper than paying for the monthly rental SaaS equivalent. It's all about doing your homework on that one.
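
That homework is mostly a break-even calculation. A simplified sketch with placeholder numbers (a real comparison should also fold in maintenance, hosting, and support costs):

# Sketch: months until a perpetual on-premises license beats a SaaS subscription.
# All figures are placeholders for illustration only.
perpetual_upfront = 40_000   # one-time perpetual license cost
perpetual_monthly = 600      # ongoing maintenance/support
saas_monthly = 2_500         # monthly subscription for the SaaS equivalent

months = 1
while perpetual_upfront + perpetual_monthly * months > saas_monthly * months:
    months += 1
print(f"On-premises becomes cheaper after roughly {months} months")   # ~22 months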

 

Conclusion

I may be preaching to the choir here, but I hope there are some useful snippets of info you can take away and apply to your next monitoring solution evaluation. As with any project, planning is key. The more info you have available to help with that planning, the more successful you will be.

Information Security artwork, modified from Pete Linforth at Pixabay.

In post #3 of this information security series, let's cover one of the essential components in an organization's defense strategy: their approach to patching systems.

Everywhere an Attack

When was the last time you DIDN'T see a ransomware attack or security breach story in the news? And when was weak patching not cited as one of the reasons the attack was possible? If only the organization had applied those security patches from a couple of years ago…

 

The Baltimore City government is the victim of one of the most recent ransomware attacks in the news. For the past month, the RobbinHood ransomware (reportedly aided by the NSA-developed EternalBlue exploit) has crippled many of the city's online processes, like email, paying water bills, and paying parking tickets. More than four weeks later, city IT workers are still working diligently to restore operations and services.

 

How could Baltimore have protected itself? Could it have applied the 2017 patches Microsoft released for the exploit behind WannaCry? Sure, but it's never just one missed series of patches.

 

Important Questions

How often do you patch? How do you find out about vulnerabilities? What processes do you have in place for testing patches?

 

How and when an organization patches systems tells you a lot about how much they value security and their commitment to designing systems and processes for those routine maintenance tasks critical to an organization's overall security posture. Are they proactive or reactive? Patching (or lack thereof) can shine a light on systematic problems.

 

If an organization has taken the time to implement systems and processes for security patch management and applying security updates, the higher-visibility areas of their information security posture (like identity management, auditing, and change management) are likely also in order. If two-year-old patches weren't applied to public-facing web servers, you can only guess what other information security best practices have been neglected.

 

If you've ever worked in Ops, you know that a reprieve from patching systems and devices will probably never come. Applying patches and security updates is a lot like doing laundry. It's hard work and no matter how much you try, you’ll never catch up or finish. When you let it slide for a while, the catch-up process is more time-consuming and arduous than if you had stayed vigilant in the first place. However, unlike the risk of having to wear dirty clothes, the threat of not patching your systems is a serious security problem with real-world consequences.

 

Accountability-Driven Compliance

We all do a better job when we're accountable to someone, whether that's an outside organization or even ourselves. If your mother-in-law continuously checked in to make sure you never fell behind on your laundry, you'd be far more dutiful about keeping up with washing your clothes. And if laundry is a constant challenge for you (like me), you'd probably design a system to make sure you could keep up with laundry duties.

 

In the federal space, continuous monitoring is a tenet of government security initiatives like the Federal Risk and Authorization Management Program (FedRAMP) and the Risk Management Framework (RMF). These initiatives centralize accountability and consequences while driving continuous patching in organizations. FedRAMP and RMF assess an organization's risk. Because unpatched systems are inherently risky, failure to comply can result in revoking an organization's Authority to Operate (ATO) or shutting down its network.

 

How can you tell if systems are vulnerable? Most likely, you run vulnerability scans that leverage Common Vulnerabilities and Exposures (CVE) entries. This CVE list, an "international cybersecurity community effort," drives most security vulnerability assessment tools in on-premises and off-premises products like Nessus and Amazon Inspector. In addition, software vendors leverage the CVE list to develop security patches for their products. Vulnerability assessment and continuous scanning end up driving an organization's continuous patching stance.
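
If you're curious about what feeds those scanners, the CVE data itself is freely queryable. A minimal sketch against NIST's public NVD 2.0 REST endpoint (unauthenticated requests are rate-limited, and the keyword here is just an example):

# Sketch: pull a handful of CVE entries for a product keyword from NIST's NVD.
# Uses the public NVD 2.0 REST API; unauthenticated requests are rate-limited.
import requests

resp = requests.get(
    "https://services.nvd.nist.gov/rest/json/cves/2.0",
    params={"keywordSearch": "apache tomcat", "resultsPerPage": 5},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json().get("vulnerabilities", []):
    cve = item["cve"]
    description = cve["descriptions"][0]["value"]
    print(f"{cve['id']}: {description[:100]}")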

 

Vendor Questions

FedRAMP also assesses security and accredits secure cloud offerings and services. This initiative allows federal organizations to tap into the "as a Service" world and let outside organizations and third-party vendors assume the security, operations, and maintenance of applications, of which patching and applying updates are an important part.

When selecting vendors or "as a service" providers, you could assess part of their commitment to security by inquiring about software component versions. How often do they issue security updates? Can you apply minor software updates out-of-cycle without affecting support?

 

If a software vendor's latest release ships a two-year-old version of Tomcat, or a version close to end of support with no planned upgrade, it may be wise to question their commitment to creating secure software.
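
One lightweight way to keep vendors honest is to compare the component versions they ship against published end-of-support dates. A toy sketch with a hand-maintained table (the entries below are illustrative; verify dates against the upstream project):

from datetime import date

# Sketch: flag shipped components whose end-of-support date has passed.
# The inventory and dates are illustrative placeholders only.
end_of_support = {
    ("tomcat", "7.0"): date(2021, 3, 31),
    ("tomcat", "8.5"): date(2024, 3, 31),
}

shipped = [("tomcat", "7.0"), ("tomcat", "9.0")]   # what the vendor's release bundles

for name, version in shipped:
    eol = end_of_support.get((name, version))
    if eol is None:
        print(f"{name} {version}: no EOL data on file; go ask the vendor")
    elif eol <= date.today():
        print(f"{name} {version}: support ended {eol}; ask the vendor why it shipped")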

 

Conclusion

The odds are that some entity will assess your organization's risk, whether it's a group of hackers looking for an easy target, your organization's security officer, or an insurance company deciding whether to issue a cyber-liability policy. One key metric will interest all of them: how many unpatched systems and vulnerabilities are lying around on your network, and the story your organization's patching tells.

Did you come here hoping to read a summary of the past 5+ years of my research on self-driving, autonomous vehicles, Tesla, and TNC businesses?

Well, you’re in luck… that’s my next post.

 

This post helps make that post far more comprehensible. So here we lay the foundation, with fun, humor, and excitement. (Mild disclaimer for those who may have heard me speak on this topic before or listened to this story shared in talks and presentations. I hope you have fun all the same.)

 

I trust you're all relatively smart and capable humans with a wide breadth and depth of knowledge about the world, so for the following image, I want you to take a step back from your machine. In fact, I want you to look at it from as far away as you possibly can, without putting a wall between you and the following image.

 

THIS IS A HOT DOG

 

OK, have you looked at the image? Do you know what it is? If you're not sure, and perhaps have a child or aging grandparents, I encourage you to ask them.

 

Did you get hot dog? OK, cool. We’re all on the same page here.

 

Now, as a mild disclaimer to my prior disclaimer: I'm familiar with the television series Silicon Valley. I don't watch it, but I know it featured a similar hot dog use case. This is not that story. It's much like when I present on computer vision being used for facial recognition by the Chinese government to profile the Muslim minority in China, and people say, "Oh, you mean like Black Mirror..." Yes, I mean "like that," except it's real and not just "TV magic."

 

Last April (April 28, 2018, to be exact), this innocuous meat enclosure was put through its paces on our top four favorite cloud AI/ML platforms, giving us all deep insight into how machines work, how they think, and what kinds of things we like to eat for lunch on a random day, the 4th of July, perhaps. These are our findings.

 

I think we can comfortably say Google is the leader in machine learning and artificial intelligence. You can disagree with that, but you'd likely be wrong. Consider TensorFlow (TF), one of the leading machine learning platforms and an open-source project from Google. TF 2.0 was recently released as an alpha, so you might think, "Yeah, immature product is more like it." But peel back that onion a bit and you'll find Google has been using it (and its internal predecessors) for years to power search and other products, so you might feel it's more appropriate to call it version 25.0, but I digress.
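
You don't even need a cloud API to play along at home. Here's a quick sketch that labels a local image with a pretrained ImageNet model in TensorFlow 2.x (ImageNet conveniently includes a "hotdog, hot dog, red hot" class); the hotdog.jpg filename is an assumption:

# Sketch: label a local image with a pretrained ImageNet model (TensorFlow 2.x).
# Assumes TensorFlow and Pillow are installed and hotdog.jpg sits next to the script.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

img = tf.keras.preprocessing.image.load_img("hotdog.jpg", target_size=(224, 224))
batch = preprocess_input(
    np.expand_dims(tf.keras.preprocessing.image.img_to_array(img), axis=0))

model = MobileNetV2(weights="imagenet")
for _, label, score in decode_predictions(model.predict(batch), top=3)[0]:
    print(f"{label}: {score:.2f}")   # with any luck, "hotdog" near the top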

 

What does Google think a hotdog is?

Image: https://twitter.com/cloud_opinion/status/989691222590550016

 

As you can see, if we ask Google, "Hey, what does that look like to you?" it seems pretty spot on. It doesn't come right out and directly say, "that’s a hot dog," but we can't exactly argue with any of its findings. (I'll let you guys argue over eating hot dogs for breakfast.) So, we're in good hands. Google is invited to MY 4th of July party!

 

But seriously, who cares what Google thinks? Who even uses Google anyway? By some 2018 measures of cloud revenue, Microsoft is actually the leader, so I'm not likely to even use Google. Instead, I'm a Microsoft-y through and through. I bleed Azure! So, forget Google. I only care about what my platform of choice believes in, because that's what I'll be using.

 

What Microsoft KNOWS a hotdog is!

Image: https://twitter.com/cloud_opinion/status/989691222590550016

 

Let’s peel back the chopped white onion and dig into the tags a bit. Microsoft has us pegged pretty heavily with its confidences, and I couldn't agree more. With 0.95 confidence, Microsoft can unequivocally and definitively say, without a doubt, I am certain:

 

This is a carrot.

 

We're done here. We don't need to say any more on this matter. And to think we Americans are only weeks away from celebrating the 4th of July. So, throw some carrots on the BBQ and get ready, because I know how I'll be celebrating!

 

Perhaps that's too much hyperbole. It’s also 0.94 confident that it's "hot" (which brings me back to my Paris Hilton days, and I'm sure those are memories you've all worked so hard to block out). But... 0.66 gives us relative certainty that if it wasn't just a carrot, it's a carrot stick. I mean, obviously it's a nicely cut, shaved, and cleaned-off carrot stick. Throw it on the BBQ!

 

OK, but just as we couldn't figure out what color “the dress” was, Microsoft knows what’s up, at 0.58 confidence. It's orange. Hey, it got something right. I mean, I figure if 0.58 were leveraged as a percentage instead of (1, 0, -1) it would be some kind of derivative of 0.42 and it would think it’s purple. (These are the jokes, people.)

 

But, if I leave you with nothing more than maybe, just MAYBE, if it's not a carrot and definitely NOT a carrot stick... it's probably bread, coming in at 0.41 confidence. (Which is cute and rings very true of The Neural Net Dreams of Sheep. I'm sure the same is true of bread, too. Even the machines know we want carbs.)

 

But who really cares about Microsoft, anyway? I mean, I'm an AWS junkie, so I use Amazon as MY platform of choice. Amazon has been the platform of choice for many police departments leveraging its facial recognition technology to identify suspects and their actions, with the goal of doing better policing and protecting us better. (There are so many links on this kind of thing being used; picking just one was difficult. Feel free to read up on how police are trying to use this, and how some states and cities are trying to ban it.)

 

Obviously, Amazon is the only choice. But before we dig into that, I want to ask you a question. Do you like smoothies? I do. My favorite drink is the Açai Super Anti-Oxidant from Jamba Juice. So, my fellow THWACKers, I have a business opportunity for you here today.

 

Between you, me, and Amazon, I think we can build a business to put smoothie stores out of business, and we can do that through the power of robots, artificial intelligence, and machine learning powered by Amazon. Who's with me? Are you as excited as I am?

 

Amazon prefers you to DRINK your Hotdogs

Image: https://twitter.com/cloud_opinion/status/989691222590550016

 

The first thing I'll order will be a sweet caramel dessert smoothie. It will be an amazing confectionery that will put your local smoothie shop out of business by the end of the week.

 

At this point, you might be asking yourself, "OK, this has been hilarious, but wasn't the topic something about businesses? Did you bury the lede?”

 

So, I'll often ask this question, usually of engineers: do businesses want machine learning? And they'll often say yes. But the truth is really, WE want machine learning, but businesses want machine knowledge.

 

It may not seem so important in the context of something silly like a hot dog, but when applied at scale and in diverse situations, things can get very grave. The default dystopian (or "realistic") future I lean toward is plasma pumps in a hospital. Imagine a fully machine-controlled, AI-leveraged device like a plasma pump. It picks out your blood type based on your chart, or perhaps it pricks your finger, figures out what blood it should give you, and starts to administer it. Not an unrealistic future, to be honest. Now, what if it was accurate 99.99999% of the time? Pretty awesome, right? Yet a business can hardly accept the liability that even 99.99999% accuracy carries. Drop that down to the 98% confidence Google had in a hot dog, or the more realistic 90-95%, and that's literal deaths on our hands.
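
To make that gap concrete, the arithmetic is all it takes; a quick back-of-the-envelope sketch:

# Sketch: expected failures per one million automated administrations
# at different accuracy levels. Nothing but arithmetic.
administrations = 1_000_000
for accuracy in (0.9999999, 0.98, 0.95, 0.90):
    failures = administrations * (1 - accuracy)
    print(f"{accuracy * 100:.5f}% accurate -> ~{failures:,.1f} failures per million")

At 98%, that's twenty thousand bad calls per million; at hospital scale, that isn't a rounding error.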

 

Yes, businesses don't really want machine learning. They say they do because it's popular and buzzword-y, but when lives are potentially on the line, tiny mistakes come with a cost. And those mistakes add up, which can affect the confidence the market can tolerate. But hey, how much IS a life really worth? If you can give up your life to make a device, system, or database run by machine learning/AI better for the next person, versus, oh I don't know, maybe training your models better and a little longer—is it worth it?

 

There will come a point (or many points) where there are acceptable tolerances, and we'll accept those tolerances for what they are because, "It doesn't affect me. That's someone else's problem." Frankly, the accuracy of machine learning has skyrocketed in the past few years alone. Compound that with better, faster, smarter, and smaller TPUs (tensor processing units), and we truly are in a next-generation era in the kinds of things we can do.

 

Google Edge TPU on a US Penny

Image: Google unveils tiny new AI chips for on-device machine learning - The Verge

 

Yes, indeed, the future will be here before we know it. But mistakes can come with dire costs, and business mistakes will cost in so many more ways, because we "trust" businesses to do the better, or at least the right, thing.

 

"Ooh, ooh! You said four cloud providers. Where's IBM Watson?"

 

Yes, you're right. I wasn't going to forget IBM Watson. After all, they're the businessman's business cloud. No one ever got fired for buying IBM, blah blah blah, so on and so forth.

 

I always include this for good measure. No, not because it’s better than Google, Microsoft, or Amazon, but because of how funny it is.

 

IBM is like a real business cloud AI ML!

Image: https://twitter.com/cloud_opinion/status/989762287819935744

 

I’ll give IBM one thing—they're confident in the color of a hot dog, and really, what more can we ask for?

 

Hopefully, this was helpful, informative, and funny (I was really angling for funny). But seriously, you should now have a better foundation for understanding the constantly "learning" nature of the technology we work with. Consider how large a knowledge base these machines have, and what they can learn in such a short time to make determinations about what a particular object is. Then, if you have young children, send them into the kitchen and say, "Bring me a hot dog," and you're far more likely to get a hot dog back than a carrot stick (or a sweet caramel dessert). The point being, the machines that learn and the "artificial" intelligence we expect them to operate with have roughly the intelligence of a 2- or 3-year-old.

 

They will get better, especially as we throw more resources at them (models, power, TPUs, CPU, and so forth) but they're not there yet. If you liked this, you'll equally enjoy the next article (or be terrified when we get to the actual subject matter).

 

Are you excited? I know I am. Let us know in the comments below what you thought. It's a nice day here in Portland, so I'm going to go enjoy a nice hot dog smoothie. 
