
Geek Speak


Back from VMworld Barcelona for a couple weeks before heading out to AWS re:Invent. If you are heading to re:Invent, let me know. I'd love to talk data with you.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

VMware, IBM Roll Out More Hybrid Cloud Features

VMware has strong ties with AWS and IBM. I’m waiting to hear of a similar partnership with Microsoft Azure. Not sure what is left for Azure to offer VMware that IBM and AWS cannot also offer.

 

‘Keep Talkin’ Larry’: Amazon Is Close to Tossing Oracle Software

I’m a little surprised they haven’t tossed it already. They certainly don’t need anything Oracle has to offer.

 

Alarm over talks to implant UK employees with microchips

Gives new meaning to “micromanagement.”

 

An AI Lie Detector Is Going to Start Questioning Travelers in the EU

Another industry facing the loss of jobs to robots. And because humans are programming these robots, you can expect them to develop biases and use profiling, no matter how much data they collect.

 

Cyber attacks are the biggest risk, companies say

If you aren’t familiar with the phrase “assume compromise,” learn it now. Stop fooling yourself that you will be able to avoid any attack, and start thinking about how you can detect and remediate.

 

Linear Regression in Real Life

Seriously good explanation of linear regression, a simple and effective method for predictive analytics. Yes, there’s math involved in this post. Don’t be afraid, you got this. And when you’re done, you can think about how to apply this method to your current role.

 

25 years ago today the first major web browser was released: Mosaic 1.0

This was the first browser I used while doing research work at Washington State University. Get off my lawn.

 

The view from my office last week, next to Plaça Espanya:

 


AI-nxiety

Posted by Ryan Adzima Nov 14, 2018

So far I’ve written about my love and hatred of what AI can do for me (or cost me) as a network owner. This post is going to cover why I’m afraid to let a machine take the wheel and use all that artificial brain power to drive my network.

 

A big joke across any field affected by AI is that Skynet and the Terminators are coming. While I don’t share that exact fear, I do wonder what the dangers are of letting an unfeeling, politically unaware machine make important decisions for the operations of my network. I’m also not in the camp that thinks AI will replace me. It could change my role a bit maybe, but nothing more in the foreseeable future. So what about AI gives me panic attacks? One word - decisions.

 

As I covered in previous posts, the ability to find and correlate information at scale is a huge benefit for anyone running a network. When an AI can scour all my logs and watch live traffic flows to find issues and alert on them, it’s a massive gain. The next logical step would be to allow the AI to run preconfigured actions to remedy the situation. Now you’ve got an intelligently automated network. But what if we go one step further and let the AI start making decisions about the network and servers? What if we let the AI start optimizing the network? I’m not talking about a simple "if this, then that" automation script. I’m talking about letting the AI actually pore over the data, devices, and history to make its own decisions about how the network should be configured, for the greater good. This is where things get a bit hairy for me. In the past I’ve used a somewhat morbid example of autonomous vehicles to make the point, and I think it’s a pretty good analogy.

 

Imagine an autonomous vehicle with you as a human passenger driving down a suburban street, humming along in what has been deemed the safest and most efficient manner. Suddenly, from behind a blind intersection, a group of children pop out on the left while simultaneously on the right a group of business people out to lunch emerge. The AI quickly scans the environment and sees only three possible outcomes.

 

  • Option 1: Save the children by driving into the group of business people.
  • Option 2: Save the business people by driving into the children.
  • Option 3: Save them both by swerving into a nearby brick wall, ultimately resulting in the passenger's doom.

 

How is this decision made? Should the developers hardcode a command to always save the passenger? What if the toll is higher next time and it’s a school bus, or even a high-profile politician about to pass laws feeding the hungry while simultaneously removing all taxes? It’s a bit of a conundrum, and like I said, it’s a bit morbid. But it does highlight some things we can’t yet teach AI: situational knowledge, context outside of its training, and politics. Yes, politics. Every office has them.

 

Take that scenario and translate it into our world. Your network is humming along beautifully when suddenly one of your executives attempts to connect an ancient wireless streaming audio device that will have a huge performance impact on all the workers around him. The logical thing to do in this situation is to simply deny the connection of the device. Clearly the ability of the other employees to do their jobs outweighs a CxO’s ability to stream some audio on an old device, and the greater good is obviously more important. Unless it’s not. This is what scares me about AI.

 

Even though my example story may have been a bit out there, I hope it helps show you why I am afraid of what infrastructure AI will likely be poised to do in the not-too-distant future (if not already).

In honor of Stanley Martin Lieber, z''l

Known to most of the world as "Stan Lee"

1922-2018

 

When we first moved into the Orthodox Jewish world, we were invited to a lot of people's houses for a lot of meals. The community is very tight-knit, and everyone wants to meet new neighbors as soon as they arrive, and so it was something that just happened. Being new – both to the community and to orthodox Judaism in general – I noticed things others might have glossed over. Finally, at the third family’s home, I couldn't contain my curiosity. I asked if everyone we had visited so far were related. No, came the reply, why would I think that? Because, I explained, everyone had the same picture of the same grandfatherly man up on the wall:

 

Image source: Rabbi Moshe Feinstein, from Wikipedia

 

Our hosts were now equal parts confused and amused. "That's Rabbi Moshe Feinstein," they explained. "He's not our grandfather. He's not related to anyone in the community, as far as we know."

 

"Then why on earth," I demanded, "is his picture on the walls of so many people's houses around here?"

 

The answer was simple, but it didn't make sense to me, at least at the time. People put up pictures of great Rabbis, I was told, because they represent who they aspire to become. By keeping their images visibly present in the home, they hoped to remind themselves of some aspect of their values, their ethics, their lives.

 

 

 

 

***********************

 

Several years later I was teaching a class of orthodox Jewish twenty-somethings about the world of IT. They were learning about everything from hardware to servers to networking to coding, but I also wanted to ensure they learned about the culture of IT. It started off as well as I'd hoped. When I got to sci-fi in general and comic books specifically, I held up a picture:

Image source: You'll Be Safe Here from Something Terrible, by Dean Trippe

 

"Can you identify anyone in this picture?" I asked.

 

Their responses were especially vehement. "Narishkeit" (foolishness) said one guy. "Bittel Torah" (sinful waste of time) pronounced another. And so on.

 

"Well I can name them all," I continued. “Every single one. And you know why? Because these aren't just characters in a story. These are my friends. And at a certain point in my life, they were my best friends. At the hardest times in my life, they were my only friends."

 

Now that they could tell I was serious, the dismissiveness was gone. "But not only that," I continued. "Each character in this picture represents a lesson. A value. A set of ethics. That big green dude? He taught me about what happens when we don't acknowledge our anger. That man with the bow tie? I learned how pure the joy of curiosity could be. And the big blue guy with the red cape? He showed me that it was OK to tone down aspects of myself in some situations, and to let them fly free in others."

 

Then I explained my confusion about the Rabbis on the wall, and how this was very much the same thing, especially for a lot of people working in tech today. And to call it narishkeit was as crude and insulting as it would be to say it was stupid to put up a picture of Rabbi Moshe Feinstein when you're not even related to him.

 

Then I explained where the picture came from, and how author Dean Trippe came to write "Something Terrible" in the first place. At this point, my class might not have understood every nuance of what comic books were all about, but they knew the picture held a deeper significance than they had first thought.

 

Going back to the picture, I asked, "This picture has a name. Do you know what it's called?"

 

You'll Be Safe Here.

 

That, I explained, was what comic books meant to me – and to so many of us.

 

That’s the world that Mr. Lieber – or Stan Lee, as so many knew him – helped create. That’s the lifeline he forged out of ideas and dreams and pulp and ink. That lifeline meant everything to a lot of us.

 

Ashley McNamara may have put it best: "I repeated 1st grade because I spent that whole year locked in the restroom. The only thing I had were comics. They were an escape from my reality. It was the only thing I had to look forward to and if not for Stan Lee and others I wouldn’t have made it."

 

 

The truth is that "Stan Lee" saved more people than all of his costumed creations combined.

 

And for a lot of people, that's the story. Stan Lee, the man-myth, who helped create a comic empire and was personally responsible for the likes of Spider-Man, Captain America, the X-Men, the Black Panther, and so on.

 

But for me there's just a little bit more. For a Jewish kid in the middle of a Midwest suburban landscape, Mr. Lieber had one more comic-worthy twist of fate. You see, he and his cohort – Will Eisner, Joe Simon, Jack Kirby (Jacob Kurtzberg), Jerry Siegel, Joe Shuster, and Bob Kane (Kahn) – didn't just SAY they were Jewish. They wove their Jewishness into the fabric of what they created. It obviously wasn't overt – none of the comics were called "Amazing Tales of Moses and his Staff of God!" Nor were Jewish themes subversively inserted. It just... was.

 

Comics told stories which were at once fantastical and familiar to me: a baby put in a basket (I mean rocket ship) and sent to sail across the river (I mean galaxy) to be raised by Pharaoh (I mean Ma and Pa Kent). Or a scrawny, bookish kid from Brooklyn who gets strong and the first thing he does is punch Hitler in the face.

 

And underlying it all was another Jewish concept: “tikkun olam”. Literally, this phrase means “fixing the world” and if I left it at that, you might understand some of its meaning. But on a deeper level, the concept of tikkun olam means to repair the brokenness of the world by finding and revealing sparks of the Divine which infuse everything. When you help another person – and because of your help they are able to rise above their challenges and become their best selves – you’ve performed tikkun olam. When you take a mundane object and use it for a purpose which creates more good in the world, you have revealed the holy purpose for that object being created in the first place, which is tikkun olam.

 

When you look at the weird, exotic, fantastical details of comic books – from hammers and shields and lassos and rings to teenagers who discover what comes with great power; and outcast mutants who save the world which rejects them; and aliens who hide behind mild-mannered facades; and Amazonians who turn away from beautiful islands to run toward danger – when you look at all of that, and you don’t see the idea of tikkun olam at play, well, you’re just not paying attention.

 

Stanley Lieber showed the world (and me) how to create something awesome, incredible, amazing, great, mighty, and fantastic but which could, for all its grandeur, still remain true to the core values that it started with. In fact, in one of his "Stan's Soapbox" responses, he addressed this:

 

“From time to time we receive letters from readers who wonder why there’s so much moralizing in our mags. They take great pains to point out that comics are supposed to be escapist reading, and nothing more. But somehow, I can’t see it that way. It seems to me that a story without a message, however subliminal, is like a man without a soul. In fact, even the most escapist literature of all – old time fairy tales and heroic legends – contained moral and philosophical points of view. At every college campus where I may speak there’s as much discussion of war and peace, civil rights, and the so-called youth rebellion as there is of our Marvel mags per se. None of us lives in a vacuum – none of us is untouched by the everyday events about us – events which shape our stories just as they shape our lives. Sure our tales can be called escapist – but just because something’s for fun, doesn’t mean we have to blanket our brains while we read it! Excelsior!”

 

Excelsior indeed.

To Stanley Martin Lieber, Zichrono Livracha.

(May his memory be for a blessing)

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

Here’s an interesting article from SolarWinds associate David Trossell, where he dives into cloud security concerns at the National Health Service (NHS).

 

Moving to the cloud is not a be-all-end-all security solution for NHS organisations.

 

Several press reports claim that NHS Digital now recognizes public cloud services as a safe way of storing health and social care patient data. In January 2018, an NHS Digital press statement quoted Rob Shaw, Deputy Chief Executive at NHS Digital.

 

It’s hoped that the standards created by the new national guidance document will enable NHS organizations to benefit from the flexibility and cost savings associated with the use of cloud facilities. However, Shaw says: "It is for individual organisations to decide if they wish to use cloud and data offshoring and there are a huge range of benefits in doing so. These include greater data security protection and reduced running costs, when implemented effectively.” 

 

With compliance with the EU’s General Data Protection Regulation (GDPR), which came into force in May 2018, in mind, the guidance offers greater clarity on how to use cloud technologies. With a specific focus on how confidential patient data can be safely managed, NHS Digital explains that the national guidance document “highlights the benefits for organisations choosing to use cloud facilities.”

 

These benefits can include “cost savings associated with not having to buy and maintain hardware and software, and comprehensive backup and fast recovery of systems.” Based on this, NHS Digital states it believes that these “features cut the risk of health information not being available due to local hardware failure.” However, at this juncture, it should be noted that the cloud is not a one-size-fits-all solution, and so each NHS Trust and body should examine the expressed benefits based on their own business, operational, and technical audits of the cloud.

 

ROI concerns

A report by Digital Health magazine suggests that not everything is rosy with the public cloud. Owen Hughes headlines that, “Only 17% of NHS trusts expect financial return from public cloud adoption.” This figure emerged from a Freedom of Information request sent to over 200 NHS trusts and foundation trusts by the Ireland office of SolarWinds, an IT management software provider. The purpose of this FOI request, which received responses from 160 trusts, was to assess NHS organisations’ plans for public cloud adoption.

 

“The gloomy outlook appears to stem from a variety of concerns surrounding the security and management of the cloud: 61% of trusts surveyed cited security and compliance as the biggest barrier to adoption, followed by budget worries (55%) and legacy tech and vendor lock-in, which scored 53% respectively,” writes Hughes.

 

Key challenges

The research also found that the key challenges trusts face in managing cloud services were determining suitable workloads (49%) and a feared lack of control over performance (47%), while 45% of respondents expressed concern about how to protect and secure the cloud.

 

To be continued…

 

Find the full article on ITProPortal.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

Young girl holding a pen. Photo by Les Anderson on Unsplash

Recently, my friend Phoummala Schmitt, aka “ExchangeGoddess” and Microsoft Cloud Operations Advocate, wrote about her struggles with imposter syndrome (https://orangematter.solarwinds.com/beating-imposter-syndrome/). It's a good read that I highly recommend. But one element of it stuck with me, like an itch I couldn't quite reach.

 

I knew this itch wasn't that someone as obviously talented and accomplished as Phoummala would experience imposter syndrome. It's been well-documented that some of the most high-achieving folks struggle with this issue. It wasn't even the advice to "strike a pose" even though—because I work from home—if I did that too often my family might start taking pictures and trolling me on Twitter.

 

No, the thing that I found challenging was the advice to "fake it."

 

Now, to be clear, there's nothing particularly wrong with adopting a “fake it till you make it” attitude, if that works for you. The challenge is that for many folks, it reinforces exactly the feelings that imposter syndrome stirs up. The knowledge that I am purposely faking something can work against the ultimate goal of me feeling comfortable in my own skin and my own success.

 

Then I caught a quote from Neil Gaiman that went viral. The full post is here (http://neil-gaiman.tumblr.com/post/160603396711/hi-i-read-that-youve-dealt-with-with-impostor), but the part that really caught my eye was this sentence:

 

"Maybe there weren’t any grown-ups, only people [...] doing the best job we could, which is all we can really hope for."

 

Maybe there weren't any grown-ups.

 

This gave me the nugget of an idea. If nobody is actually an adult, then what are we? The obvious answer is that we're still kids wearing grown-up suits. We're all playing pretend.

 

Yes, I know, "playing pretend" is almost the same as "faking it"—except, not really.

 

When you play pretend you acknowledge the reality that Mrs. Finklestein is really a bear wearing your wig, the necklace you stole is out of Mom's jewelry box, and that there's no tea in the cup—but you simply opt to not focus on that part. You’re focusing on how Mrs. Finklestein just told you the most interesting bit of neighborhood gossip, and that this tea is just the right temperature and delicious. When you play pretend, a magical transformation occurs.

 

The movie Hook had a lot of drawbacks, but this scene captures the wonder of imagination pretty well.

 

Imagination can carry us to an important place. A place where we give ourselves permission to go with our craziest guesses, or invest fully in our weirdest ideas. To explore our wildest ramblings and see where it all leads. And more importantly, imagination allows us to run down rabbit holes to a dead end without regret. With imagination, it truly is the journey that matters.

 

I remember a teacher talking about one of her best techniques for helping students get "un-stuck." When a student would say "I don't know," she would respond, "Imagine you did know. What would you say if that was true?" Sometimes, imagining ourselves in a position of knowing is all it takes to knock a recalcitrant piece of knowledge loose.

 

As adults, we may feel that imagination is something we set aside long ago. That may be true, but it wasn't to our benefit.

 

As Robert Fulghum wrote:

 

"Ask a kindergarten class, ‘How many of you can draw?’ and all hands shoot up. Yes, of course we can draw—all of us. What can you draw? Anything! How about a dog eating a fire truck in a jungle? Sure! How big you want it?

 

How many of you can sing? All hands. Of course we sing! What can you sing? Anything! What if you don't know the words? No problem, we make them up. Let's sing! Now? Why not!

 

How many of you dance? Unanimous again. What kind of music do you like to dance to? Any kind! Let's dance! Now? Sure, why not?

 

Do you like to act in plays? Yes! Do you play musical instruments? Yes! Do you write poetry? Yes! Can you read and write and count? Yes! We're learning that stuff now.

 

Their answer is ‘Yes!’ Over and over again, ‘Yes!’ The children are confident in spirit, infinite in resources, and eager to learn. Everything is still possible.

 

Try those same questions on a college audience. A small percentage of the students will raise their hands when asked if they draw or dance or sing or paint or act or play an instrument. Not infrequently, those who do raise their hands will want to qualify their response with their limitations: ‘I only play piano, I only draw horses, I only dance to rock and roll, I only sing in the shower.’

 

When asked why the limitations, college students answer they do not have talent, are not majoring in the subject, or have not done any of these things since about third grade, or worse, that they are embarrassed for others to see them sing or dance or act. You can imagine the response to the same questions asked of an older audience. The answer: no, none of the above.

 

What went wrong between kindergarten and college?

 

What happened to ‘YES! Of course I can’?"

(excerpted from “Uh-Oh: Some Observations from Both Sides of the Refrigerator Door” by Robert Fulghum)

 

So, I want to fuse these ideas together. Ideas that:

  • We sometimes feel like imposters, about to be discovered for the frauds we feel we are
  • "Fake it till you make it" doesn't go far enough to help us avoid those feelings
  • Maybe none of us are actually grown-ups, but instead are still our childlike selves, all acting the part of adults
  • Imagination is one of our most powerful tools to get past our rigid self-image and gives us permission to playact
  • And that the childlike ability to say "YES, of course I can" is infinitely more valuable than we might have once thought

 

Maybe we need to take to heart what Gaiman said. There aren't any grown-ups. Every adult you know is a little kid wearing a big-person suit, muddling along and hoping nobody notices. But we need to do more than hear it; we need to accept it and own it. Own the fact that we're little kids. Reclaim the brash, the bold, the brazen selves we used to be. When you’re experiencing an attack of self-doubt, I encourage you to imagine your 8-year-old self doing the same task. How would that kid go about it?

 

Sure, in the years since then we've all had a few scrapes and bumps.

 

But that doesn't mean we should stop imagining what it would be like to fly.

Are configuration templates still needed with an automation framework? Or does the automation policy make them a relic of the past?

 

Traditional Configuration Management: Templates

 

We've all been there. We always keep a backup copy of a device's configuration, not just for recovery, but to use as a baseline for configuring similar devices in the future. We usually strip out all of the device-specific information and create a generic template so that we're not wasting time removing the previous device's details. Sometimes we'll go so far as to embed variables so that we can use a script to create new configurations from the template and speed things up, but the principle remains the same: We have a configuration file for each device, based on the template of the day. The principle has worked for years, only complicated by that last bit. When we change the template, it's a fair bit of work to regenerate the initial configurations to comply, especially if changes have been made that aren't covered by the template, which almost always happens because we're manually making changes to devices as new requirements surface.
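
To make the variable-substitution idea concrete, here's a minimal sketch in Python using the Jinja2 templating library. The template fields and device values are hypothetical stand-ins for whatever your own template parameterizes.

```python
from jinja2 import Template  # pip install jinja2

# A stripped-down, hypothetical device template: everything
# device-specific has been replaced with a variable.
CONFIG_TEMPLATE = Template("""\
hostname {{ hostname }}
!
interface {{ mgmt_interface }}
 ip address {{ mgmt_ip }} {{ mgmt_mask }}
 no shutdown
!
ip default-gateway {{ gateway }}
""")

def render_config(device: dict) -> str:
    """Create a device-specific configuration from the generic template."""
    return CONFIG_TEMPLATE.render(**device)

if __name__ == "__main__":
    print(render_config({
        "hostname": "branch-sw-01",
        "mgmt_interface": "Vlan10",
        "mgmt_ip": "10.1.10.2",
        "mgmt_mask": "255.255.255.0",
        "gateway": "10.1.10.1",
    }))
```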

 

Modern Configuration Management: Automation Frameworks

 

Automation (ideally) moves us away from manual changes to devices. There's no more making ad hoc changes to individual devices and hoping that someone documents it sufficiently to incorporate it into the templates. We incorporate new requirements into the automation policy and let the framework handle it. One of those requirements is usually going to be a periodic backup of device configurations so that a current copy is available to provision new devices, which amounts to using automation to create static configurations.
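
As a rough sketch of that periodic-backup requirement, here's what the task might look like in Python with the Netmiko library, assuming Cisco IOS devices. The inventory and credentials are placeholders; a real framework would pull them from its source of truth and schedule the run.

```python
from datetime import date
from pathlib import Path

from netmiko import ConnectHandler  # pip install netmiko

# Hypothetical inventory; in practice this comes from the framework.
DEVICES = [
    {"device_type": "cisco_ios", "host": "10.1.10.2",
     "username": "backup", "password": "example-only"},
]

def backup_configs(backup_root: str = "config-backups") -> None:
    """Pull the running config from each device and save a dated copy."""
    out_dir = Path(backup_root) / date.today().isoformat()
    out_dir.mkdir(parents=True, exist_ok=True)
    for device in DEVICES:
        conn = ConnectHandler(**device)
        try:
            running = conn.send_command("show running-config")
            (out_dir / f"{device['host']}.cfg").write_text(running)
        finally:
            conn.disconnect()

if __name__ == "__main__":
    backup_configs()
```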

 

One Foot Forward and One Foot Behind

 

Templates and the custom configuration files built from them are almost always meant to serve as initial configurations. Once they're deployed and the device is running, their role is finished until the device is replaced or otherwise needs to be configured from scratch.

 

The automation framework, on the other hand, plays an active role in the operation of the network while each device is running. Until the devices are online and participating in the network, automation can't really touch them directly.

 

This has led to a common practice where both approaches are in play: templates are built for initial configuration, and automation is used for ongoing application of policy.

 

Basic Templates via Automation Framework

 

Most organizations I've worked with keep their templates and initial device configurations managed in some sort of version control system. Some go so far as to publish their device configurations to a TFTP server so that new systems can boot directly from the network. No matter how far we take it, this is an ideal application for automation, too.

 

We can use automation to apply policies to templates or configuration files in a version control system or TFTP repository just as easily as (possibly even more easily than) we can to a live device. This doesn't need to be complex. Creating initial configuration files with the automation framework so that new devices are reachable and identifiable is sufficient. Once the device is up and running, the policy is going to be applied immediately, so there's no need for more than the basics in the initial configuration.
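
One hedged sketch of that idea: render a bare-bones bootstrap config, drop it where new devices can fetch it, and let version control record the change. The TFTP path, template fields, and git layout are all assumptions for illustration.

```python
import subprocess
from pathlib import Path

from jinja2 import Template  # pip install jinja2

# Hypothetical paths and fields: just enough config to be reachable.
TFTP_ROOT = Path("/srv/tftp/configs")
BOOTSTRAP = Template(
    "hostname {{ hostname }}\n"
    "interface {{ mgmt_interface }}\n"
    " ip address {{ mgmt_ip }} {{ mgmt_mask }}\n"
    " no shutdown\n"
)

def publish_initial_config(device: dict) -> None:
    """Render a minimal bootstrap config, publish it to the TFTP share,
    and commit the change to the config repository."""
    target = TFTP_ROOT / f"{device['hostname']}.cfg"
    target.write_text(BOOTSTRAP.render(**device))
    subprocess.run(["git", "-C", str(TFTP_ROOT), "add", target.name],
                   check=True)
    subprocess.run(["git", "-C", str(TFTP_ROOT), "commit", "-m",
                    f"bootstrap config for {device['hostname']}"],
                   check=True)
```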

 

The Whisper in the Wires

 

The more I work with automation frameworks, the more I'm convinced that all configuration management should be incorporated into the policy. Yes, some basic configurations may need to be pre-loaded to make the device accessible, but why maintain two separate mechanisms? The basic configurations can be managed just as easily as the running devices can, so a separate template process is just an extra step. Is this something that those of you who are automating have already done, or are most of us still using one process for initial configuration and another for ongoing operation?

Where are you? Halfway through this 6-part series exploring a new reference model for IT infrastructure security!

 

As you learned in earlier posts, this model breaks the security infrastructure landscape into four domains, each containing six categories. While today's domain may seem simple, it is an area I constantly see folks getting wrong, both among my clients and in the news. So, let's carefully review the components that make up a comprehensive identity and access security system:

 

DOMAIN: IDENTITY & ACCESS

Your castle walls are no use if the attacking horde has keys to the gate. In IT infrastructure, those keys are user credentials. Most of the recent high-profile breaches in the news were simple cases of compromised passwords. We can do better, and the tools in this domain can help.

 

The categories in the identity and access domain are: single sign-on (SSO, also called identity and access management, IAM), privileged account management (PAM), multi-factor authentication (MFA), cloud access security brokers (CASB), secure access (user VPN), and network access control (NAC).

 

CATEGORY: SSO (IAM)

The weakest link in almost every organization’s security posture is its users. One of the hardest things for users to do (apparently) is manage passwords for multiple devices, applications, and services. What if you could make it easier for them by letting them log in once, and get access to everything they need? You can! It’s called single sign-on (SSO) and a good solution comes with additional authentication, authorization, accounting, and auditing (AAAA) features that aren’t possible without such a system – that’s IAM.

 

CATEGORY: PAM

Not all users are created equal. A privileged user is one who has administrative or root access to critical systems. Privileged account management (PAM) solutions provide the tools you need to secure critical assets while allowing needed access and maintaining compliance. Current PAM solutions follow “least access required” guidelines and adhere to separation-of-responsibilities best practices.

 

CATEGORY: MFA

Even strong passwords can be stolen. Multi-factor authentication (MFA) is the answer. MFA solutions combine any of the following: something you know (a password), something you have (a token, smartphone, etc.), something you are (biometrics, enrolled device, etc.), and/or somewhere you are (geolocation) for a much higher level of security. Governing security controls, such as PCI-DSS, and industry best practices require MFA to be in place for user access.
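
As a concrete illustration of the "something you have" factor, here's a small sketch using the pyotp library to enroll and verify a time-based one-time password (TOTP). The account name and issuer are made up, and a real deployment would store the secret server-side and get the code from the user.

```python
import pyotp  # pip install pyotp

# Enrollment: generate a per-user secret once and hand it to the
# user's authenticator app (usually via a QR code of this URI).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="alice@example.com",
                            issuer_name="ExampleCorp"))

# Login: the password is "something you know"; the six-digit code
# from the authenticator app is "something you have."
code_from_user = totp.now()  # stand-in for what the user would type
print("Second factor OK?", totp.verify(code_from_user))
```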

 

CATEGORY: CASB

According to Gartner: “Cloud access security brokers (CASBs) are on-premises, or cloud-based security policy enforcement points, placed between cloud service consumers and cloud service providers to combine and interject enterprise security policies as the cloud-based resources are accessed. CASBs consolidate multiple types of security policy enforcement. Example security policies include authentication, single sign-on, authorization, credential mapping, device profiling, encryption, tokenization, logging, alerting, malware detection/prevention, and so on.” If you are using multiple SaaS/PaaS/IaaS offerings, you should probably consider a CASB.

 

CATEGORY: SECURE ACCESS (VPN)

Your employees expect to work from anywhere. You expect your corporate resources to remain secure. How do we do both? With secure access. Common components of a Secure Access solution include a VPN concentrator and a client (or web portal) for each user. Worth noting, the new category of software defined perimeter (SDP) services mentioned in part 2 often look and act a lot like an always-on VPN. In any case, the products in this category ensure that users can securely connect to the resources they need, even when they’re not in the office.

 

CATEGORY: NAC

Let’s say a criminal or a spy is able to get into your office. Can they join the Wi-Fi or plug into an open jack and get access to all of your applications and data? Less nefariously, what if a user's computer hasn’t completed a successful security scan in over a week? Network access control (NAC) makes sure the bad guys can’t get onto the network and that the security posture of devices permitted on the network is maintained. Users or devices that don’t adhere to NAC policies are either blocked or quarantined via rules an administrator configures. Secure access and NAC are converging, but it’s too early for us to collapse the categories just yet.

 

ONE MORE DOMAIN!

While we’ve made a lot of progress, our journey through the domains of IT infrastructure security isn’t over yet. In the next post, we’ll peer into the tools and technologies that provide us with visibility and control. Even that isn’t the end though, as we’ll wrap the series up with a final post covering the model as a whole, including how to apply it and where it may be lacking. I hope you’ll continue to travel along with me!

This is part three in this series. In part one and part two, we covered some of the basics. But in this post, we will dig into the benefits of application performance monitoring (APM), look at a few examples, and begin looking at what APM in an Agile environment means.

Benefits

With everything we have discussed thus far, the benefits of APM may already be apparent. In case they are not, let's cover them briefly.

 

Based on some of the comments on the previous posts, there seems to be a common theme: APM is not easy to accomplish. That helps explain why many either never start or give up partway through.

 

I would personally agree that it is not an easy feat, but it is very much worth sticking with. There will be pain and suffering along the way, but in the end, your application's performance will be substantially more satisfying for everyone. Along the way you may even uncover underlying infrastructure issues that had gone unnoticed but become more apparent as APM is implemented. So, as for the greatest benefit, I would say it's being able to follow through on your APM implementation.

Examples

Let's now look at just a few examples of where APM would be beneficial in identifying a true performance degradation.

 

Users are reporting a high degree of slowness or timeouts when visiting your company’s website. I am sure this scenario rings a bell with most, and it is more than likely the only information you have been given. So where do we start looking? I bet most will begin with CPU, memory, and disk utilization. Why not? It seems logical, except that nothing there appears to be an issue. But because we have implemented our APM solution, in this scenario we were able to identify that the issue was a bad SQL query in our database. Our APM solution identified it, showed us where the issue lay, and gave us recommendations on how to resolve it.
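
For a feel of what the APM agent is doing under the hood in that scenario, here's a toy sketch in Python: wrap query execution, time it, and flag anything over a threshold. The threshold and schema are hypothetical, and a real agent would also capture traces, execution plans, and request context.

```python
import sqlite3
import time

SLOW_QUERY_THRESHOLD = 0.5  # seconds; a made-up SLO for this app

def timed_query(conn: sqlite3.Connection, sql: str, params=()):
    """Run a query and record its duration, roughly what an APM agent
    does when it instruments your database driver."""
    start = time.perf_counter()
    rows = conn.execute(sql, params).fetchall()
    elapsed = time.perf_counter() - start
    if elapsed > SLOW_QUERY_THRESHOLD:
        # A real agent ships this to its backend with a trace ID.
        print(f"SLOW QUERY ({elapsed:.3f}s): {sql}")
    return rows

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
    timed_query(conn, "SELECT id, total FROM orders WHERE total > ?", (100,))
```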

 

Now, let us assume that we were getting reports of slowness once again on our company’s website. But this time our application servers appear to be fine and our APM solution is not showing us anything regarding performance degradation. So, we respond with the typical answer, “Everything looks fine.”

 

A bit later, one of your network engineers reports a high amount of traffic hitting the company’s load balancers: a DDoS attack is causing them to drop packets to anything behind them. And guess what? Your application's web servers sit directly behind the affected load balancers, which would explain the reports you received earlier. In this case, we had not configured APM to monitor anything other than our application servers, so we never saw anything out of the norm. This is a good example of the need to monitor not only your application servers, but also all the external components that affect the performance your application delivers. Had we done so, we could at the very least have correlated the reports of slowness with the high volume of traffic hitting the load balancers. In addition, our APM was not configured to monitor connection metrics on our application servers; if it had been, we would have noticed those metrics deviating from normal.

Conclusion of After The Fact Monitoring

If you recall, I mentioned in the first article that I would reference the traditional APM methodology as “After the fact implementation.” This is more than likely the most typical scenario, which also leads to the burden and pain of implementation. In the next post of this series, we will begin looking at implementing APM in an Agile environment.

This version of the Actuator comes to you from Barcelona, where I am attending VMworld. This is one of my favorite events of the year. And I’m not just saying that because in Barcelona I can buy a plate of meat and it’s called “salad.” OK, maybe I am.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

The Cybersecurity Hiring Gap is Due to The Lack of Entry-level Positions

Yes, the hiring process is broken. This post helps break it down. You could replace cybersecurity with other tech roles and see that the same issues exist everywhere.

 

Directions

Brilliant post from Leon where he helps break down a better way to conduct an interview. This is something near and dear to my heart, having written more than a few times about DBA jobs and interviews.

 

That sign telling you how fast you’re driving may be spying on you

It’s one thing to collect the data, it’s another to use the data. I think the collection is fine, but you need a warrant to search that database. And this is also a case where you can’t allow someone to be given SysAdmin access “just because.”

 

Who Is Agent Tesla?

Is it a monitoring agent? Is it malware? Why can’t it be both? Folks, if your “monitoring software” is asking for payment in Bitcoin, then you are asking for trouble when you install it.

 

Lyft speeds ahead with its autonomous initiatives

Because I haven’t been including enough autonomous car stories lately, I felt the need to share this one. And when I am at AWS re:Invent later this month, I hope to use one.

 

Inside Europe’s quest to build an unhackable quantum internet

I don’t know why, but I’m more bullish about quantum computing than I am about blockchain. The quantum internet sounds cool, but the reality is most data breaches happen when Adam in Accounting leaves a laptop on a bus.

 

Apple Reportedly Blocked Police iPhone Hacking Tool and Nobody Knows How

Score one for the good guys! Wait. When did Apple become the good guys again?

 

I love the salad bars here in Barcelona:

 

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

Agencies are becoming far more proactive in their efforts to combat threats, as evidenced by the Department of Defense’s Comply-to-Connect and the Department of Homeland Security’s Continuous Diagnostics and Mitigation programs. To develop and maintain strong security hygiene that supports these and other efforts, agencies should consider implementing five actions that can help strengthen networks before the next attack.

 

Identify and dispel vulnerabilities

 

Better visibility and understanding of network devices are key to optimal cybersecurity. Agencies should maintain device whitelists or known asset inventories and compare the devices that are detected to those databases. Then, they can make decisions based on their whitelist.
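
A minimal sketch of that compare-and-decide step, with hypothetical MAC addresses standing in for a real asset inventory and scan results:

```python
# Known assets would come from a CMDB; detected devices from a scan.
known_assets = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}
detected = {"00:1a:2b:3c:4d:5e", "de:ad:be:ef:00:01"}

for mac in sorted(detected - known_assets):
    print(f"ALERT: unapproved device on network: {mac}")
for mac in sorted(known_assets - detected):
    print(f"NOTE: inventoried device not seen: {mac}")
```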

 

Identifying vulnerable assets and updating them will likely be more cost effective—and safer—than trying to maintain older systems.

 

Update and test security procedures

 

Many agencies engage in large-scale drills, but it’s equally important to test capabilities on a smaller scale and monitor performance under simulated attacks. Agencies must get into the habit of testing every time a new technology is added to the network, or each time a new patch is implemented. Likewise, teams should update and test their security plans and strategies frequently. In short: verify, then trust. An untested disaster recovery plan is a disaster waiting to happen.

 

Make education a priority

 

A significant number of IT professionals feel that agencies are not investing enough in employee training. Lack of training could pose risks if IT professionals are not appropriately knowledgeable on technologies and mitigation strategies that can help protect their organizations.

 

Agencies must also invest in ongoing user training, so their teams can be more effective. This includes solution training, but it may also encompass sessions that focus on the latest malware threats, hacker tactics, or the potential dangers posed by insiders.

 

Take a holistic view of everyone’s roles

 

It’s good that the government is focused on hiring highly-skilled cybersecurity professionals. Last year the General Services Administration held a first-ever event to recruit new cybersecurity talent, and we will likely see similar job fairs in the future.

 

However, security is everyone’s job. Managers must institute a culture of information sharing amongst team members; there’s no room for silos in cybersecurity. Everyone must be vigilant and on the lookout for potential warning signs, regardless of their job descriptions.

 

Implement the proper procedures for a cyber assault

 

Still, threats will inevitably occur, and while there are a variety of mechanisms and techniques that can be used in response, all involve having the correct tools working in concert. For instance, a single next-generation firewall is great, but ineffective in the event of data exfiltration over Domain Name System (DNS) traffic.

 

To help protect critical services, agencies must employ a suite of solutions that can accurately detect anomalies that originate both inside and outside the network. These should include standard network monitoring and firewall solutions. Agencies may also want to consider implementing automated patch management, user device tracking, and other strategies that can provide true defense-in-depth capabilities.

 

Find the full article on SIGNAL.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Despite all the talk of all our jobs being replaced by automation, my experience is that the majority of enterprises are still very much employing engineers to design, build, and operate the infrastructure. Why, then, are most of us stuck in a position where despite experimenting with infrastructure automation, we have only managed to build tools to take over small, mission-specific tasks, and we've not achieved the promised nirvana of Click Once To Deploy?

 

We Are Not Stupid

 

Before digging into some reasons we're in this situation, it's important to first address the elephant in the room, which is the idea that nobody is stupid enough to automate themselves out of a job. Uhh, sure we are. We're geeks, we're driven by an obsession with technology, and the vast majority of us suffer from a terrible case of Computer Disease. I believe that the technical satisfaction of managing to successfully automate an entire process is a far stronger short-term motivation than any fear of the potential long-term consequences of doing so. In the same way that hoarding information as a form of job security is a self-defeating action (as Greg Ferro correctly says, "if you can't be replaced, you can't be promoted"), avoiding automation because it takes a task away from the meatbags is an equally silly idea.

 

Time Is Money

 

Why do we want to automate? Well, automation is the path to money. Automation leads to time-saving; time-saving leads to agility; agility leads to money. Save time, you must! Every trivial task that can be accomplished by automation frees up time for more important things. Let's be honest, we all have a huge backlog of things we've been meaning to do, but don't have time to get to.

 

However, building automation takes time too. There can be many nuances to even simple tasks; codifying those nuances and handling exceptions can be a significant effort, and large-scale automation is exponentially more complex. Because of that, we start small and try to automate small steps within the larger task, because that's a manageable project. Soon enough, there will be a collection of small automated tasks built up, each of which requires its own inputs and generates its own outputs, and usually none of which can talk to each other, because each element was written independently. Even so, this is not a bad approach: if the tasks chosen for automation occur frequently, the time saved by the automation can outweigh the time spent developing it.

 

This kind of automation still needs hand-holding and guidance from a human, so while the humans are now more productive, they haven't replaced themselves yet.

 

Resource Crunch

 

There's an oft-cited problem that infrastructure engineers don't understand programming and programmers don't understand infrastructure, and there's more than a grain of truth to this idea. Automating small tasks is something that many infrastructure engineers will be capable of, courtesy of some great module/package support in scripting languages like Python. Automating big tasks end-to-end is a different ball game, and typically requires a level of planning and structure in the code exceeding that which most infrastructure engineers have in their skills portfolio. That's to be expected: if coding were an engineer's primary skill, they'd more likely be a programmer, not an infrastructure engineer.

 

Ultimately, scaling an automation project will almost always require dedicated and skilled programmers, who are not usually found in the existing team, and that means spending money on those programming resources, potentially for an extended period of time. While the project is running, it's likely there will be little to no return on the investment. This is a classic demonstration of the maxim that you have to speculate to accumulate, but many companies are not in a position, or are simply unwilling, to invest that money up front.

 

The Cliffs Of Despair

 

With this in mind, in my opinion, one of the reasons companies get stuck with lots of small automation is that it's relatively easy to automate multiple small tasks, but taking the next step and automating a full end-to-end process is a step too far for many companies. It's simply too great a conceptual and/or financial leap from where things are today. Automating every task is so far off in the distance, nobody can even forecast it.

 

They say a picture is worth a thousand words, which probably means I should have just posted this chart and said "Discuss," but nonetheless, as a huge fan of analyst firms, I thought that I could really drive my point home by creating a top quality chart representing the ups and downs of infrastructure automation.

 

The Cliffs Of Despair

 

As is clearly illustrated here, after the Initial Learning Pains, we fall into the Trough Of Small Successes, where there's enough knowledge to create many small automation tools. However, the Cliffs Of Despair loom ahead as it becomes necessary to integrate these tools together and orchestrate larger flows. Finally, after much effort, a mechanism emerges by which the automation flows can be integrated, and the automation project enters the Plateau of Near Completion, where the new mechanism is applied to the many smaller tools and good progress is made toward the end goal of full automation. However, just as the project manager announces that there are only a few remaining tasks before the project can be considered a wrap, development enters the Asymptotic Dream Of Full Automation, whereby no matter how close the team gets to achieving full automation, there's always just one more feature to include, one more edge case that hadn't arisen before, or one more device OS update that breaks the existing automation, thereby ensuring that the programming team has a job for life and will never achieve the sweet satisfaction of knowing that the job is finished.

 

Single Threaded Operation

 

There's one more problem to consider. Within the overall infrastructure, each resource area (e.g., compute, storage, network, security) is likely working its own way towards the Asymptotic Dream Of Full Automation, and at some point will discover that full, end-to-end automation means orchestrating tasks between teams. And that's a whole new discussion, perhaps for a later post.

 

Change My Mind

 


The hype around blockchain technology is reaching a fever pitch these days. Visit any tech conference and you’ll find more than a handful of vendors offering blockchain in one form or another. This includes Microsoft, IBM, and AWS. Each of those companies offers a public blockchain as a service.

 

Blockchain is also the driving force behind cryptocurrencies, allowing Bitcoin owners to purchase drugs on the internet without the hassle of showing their identity. So, if that sounds like you, then yes, you should consider using blockchain. A private one, too.

 

Or, if you’re running a large logistics company with one or more supply chains made up of many different vendors, and need to identify, track, trace, or source the items in the supply chain, then blockchain may be the solution for you as well.

 

Not every company has such needs. In fact, there’s a good chance you are being persuaded to use blockchain as a solution to a current logistics problem. It wouldn’t be the first time someone tried to sell you a piece of technology you don’t need.

 

Before we can answer the question of whether you need a blockchain, let’s take a step back and make certain we understand blockchain technology, what it solves, and the issues involved.

 

What is a blockchain?

The simplest explanation is that a blockchain serves as a ledger: a series of transactions using cryptography to verify each transaction in the chain. Think of it as a very long sequence of small files, each based on a hash value of the previous file combined with new bits of data and the answer to a math problem.
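
A toy version of that chain is easy to sketch in Python. This hashes each entry together with its predecessor's hash (leaving out the "math problem," i.e., proof of work) to show why tampering with one entry breaks every link after it:

```python
import hashlib
import json

def make_block(prev_hash: str, data: dict) -> dict:
    """Each block commits to the previous block's hash, so altering
    any earlier entry changes every hash after it."""
    payload = json.dumps({"prev": prev_hash, "data": data}, sort_keys=True)
    return {"prev": prev_hash, "data": data,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

# A tiny ledger of made-up transactions.
chain = [make_block("0" * 64, {"from": "alice", "to": "bob", "amount": 5})]
chain.append(make_block(chain[-1]["hash"],
                        {"from": "bob", "to": "carol", "amount": 2}))

# Tampering with the first entry breaks the link to the second.
chain[0]["data"]["amount"] = 500
recomputed = make_block(chain[0]["prev"], chain[0]["data"])["hash"]
print("Chain still valid?", recomputed == chain[1]["prev"])  # False
```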

 

Put another way, blockchain is a database—one that is never backed up, grows forever, and takes minutes or hours to update a record. Sounds amazing!

 

What does blockchain solve?

Proponents of blockchain believe it solves the issue of data validation and trust. For systems needing to verify transactions between two parties, you would consider blockchain. The leading problem blockchain is applied to is supply chain logistics, specifically food sourcing and traceability.

 

Examples include Walmart requiring food suppliers to use a blockchain provided by IBM starting in 2019. Another is Albert Heijn using blockchain technology along with the use of QR codes to solve issues with orange juice. Don’t get me started on the use of QR codes; we can save it for a future post.

 

The problem with blockchain

Blockchain is supposed to make your system more trustworthy, but it does the opposite. Blockchain pushes the burden of trust down to the individuals adding transactions to the blockchain. This is how all distributed systems work. The burden of trust goes from a central entity to all participants. And this is the inherent problem with blockchain. What’s worth mentioning here is how many cryptocurrencies rely on trusted third parties to handle payouts. So, they use blockchain to generate coins, but don’t use blockchain to handle payouts because of the issues involved around trust.

 

Here’s another example of an issue with blockchain: data entry. In 2006, Walmart launched a system to help track bananas and mangoes from field to store, only to abandon it a few years later. The reason? It was difficult to get everyone to enter their data. Even when data is entered, blockchain does nothing to validate that the data is correct. Blockchain will validate that the transaction took place, but does nothing to validate the actions of the entities involved. For example, a farmer could spray pesticides on oranges but still call them organic. It’s no different than how I refuse to put my correct cell phone number into any form on the internet that requires one.

 

In other words, blockchain, like any other database, is only as good as the data entered. Each point in the ledger is a point of failure. Your orange, or your ground beef, may be locally sourced, but that doesn’t mean it’s safe. Blockchain may help determine the point of contamination faster, but it won’t stop it from happening.

 

Do you need a blockchain?

Maybe. All we need to do is ask ourselves a few questions.

 

Do you need a [new] database? If you need a new database, then you might need a blockchain. If an existing database or database technology would solve your issue, then no, you don’t.

 

Let’s assume you need a database. The next question: Do you have multiple entities needing to update the database? If no, then you don’t need a blockchain.

 

OK, let’s assume we need a new database and we have many entities needing to write to it. Are all the entities involved known, and do they trust each other? If the answer is no, then you don’t need a blockchain. If the entities have a third party everyone can trust, then you also don’t need a blockchain. A blockchain should remove the need for a third party.

 

OK, let’s assume we know we need a database, with multiple entities updating it, all trusting each other. The final question: Do you need this database to be distributed in a peer-to-peer network? If the answer is no, then you don’t need a blockchain.

 

If you have different answers, then a private or public blockchain may be the right solution for your needs.
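
Taken at face value, the questions above collapse into a short decision function. This sketch simply encodes the post's flow, nothing more:

```python
def need_blockchain(need_new_db: bool,
                    multiple_writers: bool,
                    writers_known_and_trusting: bool,
                    trusted_third_party_exists: bool,
                    need_p2p_distribution: bool) -> bool:
    """Encodes the questions above, in order, as written in the post."""
    if not need_new_db:
        return False            # an existing database will do
    if not multiple_writers:
        return False            # one writer doesn't need a chain
    if not writers_known_and_trusting:
        return False            # per the post's wording
    if trusted_third_party_exists:
        return False            # a trusted third party is simpler
    return need_p2p_distribution

print(need_blockchain(True, True, True, False, True))   # True
print(need_blockchain(True, True, True, True, True))    # False
```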

Summary

No, you don’t need a blockchain. Unless you do need one, but that’s not likely. And it won’t solve basic issues of data validation and trust between entities.

So far in this series we have reviewed a few popular and emerging models and frameworks. These tools are meant to help you make sense of where you are and how to get where you’re going when it comes to information security or cybersecurity. We’ve also started the process of defining a new, more practical, more technology-focused map of the cybersecurity landscape. At this point you are familiar with the concept of four critical domains, and six key technology categories within each. Today we’ll dive into the second domain: Endpoint and Application.

 

I must admit that not everyone agrees with me about lumping servers and applications in with laptops and mobile phones as a security domain. The choice was a risk, but I believe it makes the most sense: so many of the tools and techniques are the same for both groups of devices, especially now, as we move our endpoints out onto networks that we don't fully control (or control at all, in some cases). Let's explore it together, and then let me know what you think!

 

Domain: Endpoint & Application

If we stick with the castle analogy from part 2, endpoints and applications are the people living inside the walls. Endpoints are the devices your people use to work: desktops, laptops, tablets, phones, etc. Applications are made up of the servers and software your employees, customers, and partners rely upon. These are the things that are affected if an attack penetrates your perimeter, and as such, they need their own defenses.

 

The categories in the endpoint and application domain are endpoint protection, detection, and response (EPP / EDR), patch and vulnerability management, encryption, secure application delivery, mobile device management (MDM), and cloud governance.

 

Category: EPP / EDR

The oldest forms of IT security are firewalls and host antivirus. Both have matured a lot in the past 30+ years. Endpoint protection (EPP) is the evolution of host based anti-malware tools, combining many features into products with great success rates. Nothing is perfect, however, and there are advanced persistent threats (APT) that can get into your devices and do damage over time. Endpoint detection and response (EDR) tools are the answer to APT. We're combining these two concepts into a single category because you need both – and luckily for us, many manufacturers now combine them as features of their endpoint security solution.

 

Category: Patch and Vulnerability Management

While catching and stopping malware and other attacks is great, what if you didn’t have to? Tracking potential vulnerabilities across your systems and automatically applying patches as needed should reduce the exploit capabilities of an attacker and help you sleep better at night. While you can address patch management without vulnerability management, I recommend that you take a comprehensive and automated approach, which is why they are both covered in this category.

 

Category: Encryption

When properly applied, encryption is the most effective way to protect your data from unwanted disclosure. Of course, encrypted data is only useful if you can decrypt it when needed – be sure to have a plan (and the proper tools) for extraction! Encryption/decryption utilities can protect data at rest (stored files), data in use (an open file), and data in motion (sending/receiving a file).
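
For data at rest, a minimal sketch using Python’s third-party cryptography package might look like the following. The filenames are made up, and in real life the key would live in a key management system - that key is your "plan for extraction":

```python
from cryptography.fernet import Fernet

# Generate a key once and store it safely - losing the key means losing the data.
key = Fernet.generate_key()
fernet = Fernet(key)

# Protect data at rest: encrypt the stored file.
with open("report.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())
with open("report.csv.enc", "wb") as f:
    f.write(ciphertext)

# Decrypt when the data needs to be used again.
with open("report.csv.enc", "rb") as f:
    plaintext = fernet.decrypt(f.read())
```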

 

Category: Secure Application Delivery

Load balancers used to be all you needed to round-robin requests across your various application servers. Today’s application delivery controllers (ADCs) do much more than that. You always want to put security first, so I recommend an ADC that includes a web application firewall (WAF) and other security features for secure application delivery.

 

Category: Mobile Device Management

EPP and EDR may be enough for devices that stay on-prem, under the protection of your perimeter security tools, but what about mobile devices? When people are bringing their own devices into your network, and taking your devices onto other networks, a more comprehensive security-focused solution is needed. These solutions fall under the umbrella of mobile device management (MDM). 

 

Category: Cloud (XaaS) Governance

Cloud governance is a fairly new realm and in many ways is still being defined. What’s more, to an even greater degree than the other categories here, governance must always include people, processes, and technology. Since this reference model is focused on technology and practical tools, this category includes technologies that enable and enforce governance. As your organization becomes increasingly dependent on a growing number of cloud platforms, you need visibility and policy control over that emerging multi-cloud environment. A solid cloud governance tool provides both.

What's Next?

We are now three parts into this six-part series. Are you starting to feel like you know where you are? How about where you need to be going? Don’t worry, we still have two more domains to cover, and then a final word on how to make this model practical for you and your organization. Keep an eye out for part 4, where we’ll dive into identity and access - an area that many of you are probably neglecting, despite its extreme importance. Talk to you then!

What Should We Be Monitoring?

 

To get a grasp on your application’s performance, you must first map out all the components in its path. It might be wise to begin with the server or servers where your application lives. Again, this could be any of several scenarios depending on the architecture housing your application - or it could easily be a mixture of different architectures.

 

Let’s say your application is a typical three-tier app that runs on several virtual servers on top of some hypervisor. You would want to start collecting logs and metrics from the hypervisor platform - CPU, memory, network, and storage - and the same metrics from each virtual server. In addition, your application’s own logs and metrics are crucial to the overall picture. If, for some reason, your application does not provide these capabilities, you will need to either develop them or rely on existing tooling that you can easily integrate into your application.

These metrics are crucial for identifying potential bottlenecks and failures and for judging the overall health of your application. Beyond what we have already identified, we also need to collect metrics from any load balancers, databases, and caching layers. Aggregating all of these metrics for deep analysis and correlation gives us an overall view of how our application is stitched together and helps us pinpoint where a potential issue might arise.
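
As a minimal sketch of the virtual-server half of that list, the cross-platform psutil library can sample the CPU, memory, disk, and network counters mentioned above. The ten-second interval and the print-instead-of-ship behavior are placeholders; the hypervisor and application layers need their own collectors:

```python
import time
import psutil

def collect_host_metrics() -> dict:
    """Sample the basic host metrics: CPU, memory, disk, and network."""
    net = psutil.net_io_counters()
    return {
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
        "net_bytes_sent": net.bytes_sent,
        "net_bytes_recv": net.bytes_recv,
    }

while True:
    sample = collect_host_metrics()
    print(sample)   # in practice, ship this to your metrics collector
    time.sleep(10)  # hypothetical 10-second sample interval
```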

How Should We Be Monitoring?

 

We have now identified a few of the things we should be monitoring. What we need to figure out next is how we will monitor them, and how to ensure the telemetry they provide is accurate and valuable. Telemetry data comes from logs, metrics, and events.

Logging (Syslog)

 

First, let us begin with logging. We should have a centralized logging solution that can not only receive our log messages but also aggregate and correlate events. In addition, our logging solution should give us graphs and customizable dashboards, along with some level of alerting. If our logging solution meets these initial requirements, we are already off to a good start.
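
Getting an application’s own logs into such a solution can be as simple as pointing Python’s standard logging module at the central syslog collector. A minimal sketch, with a made-up collector address:

```python
import logging
from logging.handlers import SysLogHandler

# Hypothetical central collector address - replace with your own.
handler = SysLogHandler(address=("logs.example.com", 514))
handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))

log = logging.getLogger("myapp")
log.setLevel(logging.INFO)
log.addHandler(handler)

log.info("order service started")
log.error("database connection failed, retrying")
```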

 

Now we need to configure all our hypervisors, servers, load balancers, firewalls, caching layers, application servers, database servers, and so on. There are many, many systems to collect logs from, but we need to make sure that everything directly in the path of our application is configured for logging - and remember that your application’s own logs need to be collected as well. With all these components sending logging data, we should begin seeing events that, over time, paint a picture of what we might consider normal. But remember: this is still what we earlier termed after-the-fact application monitoring.

Metrics

 

There are numerous methods of obtaining metrics, but we should be clear about one thing before discussing them: SNMP polling data is not the way to go. Now don’t get me wrong - SNMP polling data is better than nothing at all. However, there are much better sources of metric data.

 

Our performance metrics should be time series-based: streamed continuously to our centralized metrics collection solution. With time series data, we can drill into a performance graph at a very fine level of detail.
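
A time series data point is nothing more than a metric name, a value, some identifying tags, and a timestamp. The widely used InfluxDB line protocol makes that shape concrete; here’s a sketch that formats one point (the measurement and host names are made up, and real tag/field values need escaping that this skips):

```python
import time

def line_protocol(measurement: str, tags: dict, fields: dict) -> str:
    """Format one time series point in InfluxDB line protocol."""
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str} {time.time_ns()}"

# One CPU sample from a hypothetical host, ready to send to the collector.
print(line_protocol("cpu", {"host": "web01"}, {"usage_percent": 42.5}))
# -> cpu,host=web01 usage_percent=42.5 <nanosecond timestamp>
```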

 

Most time series-based metrics require an agent of some sort on each device we want to collect from, responsible for providing the metrics we are interested in. Those include the ones mentioned before - CPU, memory, network, disk, etc. - but we are also interested in metrics from our application stack, such as response latency, searches, database queries, and user patterns. We need all of these metrics to see, visually, what the environment looks like, including its health and performance.
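
Application-level metrics such as response latency are usually emitted by the application code itself. The sketch below hand-rolls the simple statsd text format over UDP, so no agent library is required; the collector address, metric name, and query function are all hypothetical:

```python
import socket
import time

# Hypothetical statsd-compatible collector - replace with your own.
COLLECTOR = ("metrics.example.com", 8125)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def timing(metric: str, milliseconds: float) -> None:
    """Send one timing metric in the statsd text format: name:value|ms"""
    sock.sendto(f"{metric}:{milliseconds:.1f}|ms".encode(), COLLECTOR)

def run_database_query():
    time.sleep(0.05)  # placeholder for a real database call

# Wrap a database query (or any call) to measure its response latency.
start = time.monotonic()
run_database_query()
timing("myapp.db.query_latency", (time.monotonic() - start) * 1000.0)
```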

 

With metrics and logging in place, we can begin correlating application performance with events and logs, so that when performance degrades we can start understanding what the underlying issue might be.

Welcome to the Halloween Edition of the Actuator! It’s the Halloween Edition because of the date, not because the stories are scary. Although you may be concerned that the data is coming from inside the house.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Red Hat Cloud Prowess Drives $33 Billion IBM Deal

You want scary? Here’s IBM rising from the grave to buy the largest open source company in the world. Red Hat was set to make $3 billion this year selling software that runs on top of Linux, which is free. But hey, as long as they aren’t Microsoft, it’s OK for someone to profit, right?

 

Feds Say Hacking DRM to Fix Your Electronics Is Legal

In other news, our federal government made a decision that favored consumers. Yes, I’m scared, too.

 

"Smart home" companies refuse to say whether law enforcement is using your gadgets to spy on you

[NARRATOR VOICE]: They are totally spying on you.

 

Tim Cook calls for strong US privacy law, rips “data-industrial complex”

"Profits over privacy," well stated and I agree. We need better laws here. Sadly, our elected officials lack the necessary experience to make this happen.

 

Hubble Telescope’s Broken Gyroscope Seemingly Fixed After Engineers Try Turning It Off and On Again

Houston, have you tried turning it off and back on again?

 

GM’s data mining is just the beginning of the in-car advertising blitz

If you thought the ads at the gas pump, or inside taxis in Vegas and NYC were horrible, just wait until you see them in your own car.

 

Minds, the blockchain-based social network, grabs a $6M Series A

Yet another waste of time and money on an idea with Blockchain in the title. If this company is worth $6 million in funding, my Bacon Blockchain idea is worth at least $600 million in the first round.

 

As I was saying:

 
