
Geek Speak


The Actuator - April 27th

Posted by sqlrockstar Employee Apr 27, 2016

Things I find amusing from around the Internet…


2015’s MVPs – The Most Vulnerable Players

For all the Microsoft® haters out there who believe Windows® is insecure when compared to other platforms, here is some data to help you get a feel for the truth: everything is broken.


The Costs of Running SQL Server on Linux

Yes, SQL Server® costs a bit of money, but with the move to Linux®, I wouldn’t be surprised to see the costs come down in favor of a subscription-based model. 


No Cybersecurity Training Required at Top U.S. Universities

Well, that might explain why there seems to be a lack of such experience among members of Congress, too. 


Titanic Sinking in Real-Time

For me, the best part of this video was the absence of Celine Dion songs.


Boaty McBoatface and the False Premise of Democracy

This is why we can't have nice things. It's also a great reminder of why crowdsourcing is often a horrible idea. None of us is as dumb as all of us.


Bigger Underwater Data Center in the Works

The future of the data center could mean lots of extra time spent on a beach for many of us. I like where this is headed.


How Much Would Darth Vader's Suit Cost?

Pretty sure the cost of the suit was covered through UnitedEmpire HMO premiums, but interesting to note the costs. I bet Elon Musk could build this for 1/10 the cost.


RDBMS Genealogy

For those of us who like to remember the good old days.



Regardless of which new technologies federal network administrators adopt, they will always need dependable, consistent, and highly available solutions that keep their networks running -- constantly.


Sadly, that’s not always the reality.


Last year's survey of federal IT professionals by my company, SolarWinds, indicated that performance and availability issues continue to plague federal IT managers. More than 90 percent of survey respondents claimed that end-users in their organizations were negatively impacted by a performance or availability issue with business-critical technology over the past year, and nearly 30 percent of respondents claimed these issues occurred at least six times.


What can IT pros do about this?




Don’t worry about deploying everything in one fell swoop. Instead, take a piecemeal approach. Focus on a single implementation and make sure that particular piece of technology absolutely shines. The trick to this strategy is keeping the big picture in mind as the individual pieces of technology are deployed.




Network monitoring is a must. To do it properly, start with a baseline diagnostic that assesses the overall network performance, including availability and average response times. Once this baseline is established, look for anomalies, including configuration changes that other users may have made to the network. Find the changes, identify who made them, and factor their impact into the performance data as you identify problems and keep the network running.
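That baseline-then-anomaly workflow can be sketched in a few lines of Python. The sample response times and the three-sigma threshold below are illustrative assumptions, not anything prescribed by the article:

```python
from statistics import mean, stdev

def is_anomalous(baseline_samples, new_sample, threshold=3.0):
    """Flag a response time more than `threshold` standard deviations
    above the established baseline."""
    baseline = mean(baseline_samples)
    spread = stdev(baseline_samples)
    return new_sample > baseline + threshold * spread

# Average response times (ms) collected during the baseline period
history = [42, 45, 40, 44, 43, 41, 46]
print(is_anomalous(history, 44))   # within the normal range
print(is_anomalous(history, 250))  # well above baseline: investigate
```

The same comparison works for any metric with a stable baseline, such as availability percentages or interface error counts.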




Make no mistake: errors will happen, and it’s important to have a plan in place when things go south. That plan should comprise three facets: technology, people, and process.


First, a well-defined technology plan outlines how to best handle the different components of the network infrastructure, including monitoring and building in redundancies. That means having a backup for equipment that’s core to an agency’s network traffic.


Second, make sure the IT staff includes several people who share the same skillset and expertise. What happens if a key resource is out sick or leaves the organization? All of that expertise is gone, leaving a very big knowledge gap that will be hard to fill.


Third, develop a process that allows for rollbacks to prior configurations. That’s an important failsafe in case of a serious network error.
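A rollback process like this can start as something as simple as archiving a timestamped copy of each configuration before a change, and restoring the newest archive when things go wrong. This stdlib-only sketch is a hypothetical illustration, not any particular product's mechanism:

```python
import shutil
from datetime import datetime
from pathlib import Path

def snapshot(config_path, archive_dir):
    """Archive a timestamped copy of the config before any change."""
    archive = Path(archive_dir)
    archive.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = archive / f"{Path(config_path).name}.{stamp}"
    shutil.copy2(config_path, dest)
    return dest

def rollback(config_path, archive_dir):
    """Restore the most recently archived copy of the config."""
    name = Path(config_path).name
    candidates = sorted(Path(archive_dir).glob(f"{name}.*"))
    if not candidates:
        raise FileNotFoundError("no archived configuration to roll back to")
    shutil.copy2(candidates[-1], config_path)
    return candidates[-1]
```

The timestamp format sorts lexicographically, so the last entry of the sorted list is always the newest snapshot.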




IT professionals need to understand organizational objectives to accomplish their own goals, which include optimizing and securing a consistently dependable network. Doing that is not just about technology. It also requires the ability to communicate freely with colleagues and agency leadership so that everyone is working toward the same goals.


CIOs must build a culture that is barrier-free and allows for regular interaction with other business leaders outside the technology realm. After all, isn’t that network or database that the IT staff manages directly tied to agency performance?


Having everything run perfectly all the time is an impossible dream. However, six nines of uptime is certainly achievable. All it takes is a little bit of simplification and planning, and a whole lot of technology and teamwork.
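For the curious, "six nines" (99.9999% availability) is easy to put in concrete terms: it leaves roughly 31.5 seconds of downtime per year. A quick back-of-the-envelope calculation:

```python
def allowed_downtime_seconds(nines, period_seconds=365 * 24 * 3600):
    """Annual downtime budget for a given number of nines of availability."""
    unavailability = 10 ** (-nines)
    return period_seconds * unavailability

# three nines ~ 8.8 hours/year; five nines ~ 5.3 minutes; six nines ~ 31.5 seconds
for n in (3, 5, 6):
    print(f"{n} nines: {allowed_downtime_seconds(n):.1f} s of downtime per year")
```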


Find the full article on GCN.


Interested in this year’s cyber security survey? Go here.

The Root Cause of Silo Thinking

I have faced the silo problem many times. In IT, many organizations work in silos; the server department, for example, only cares about problems concerning servers. When a problem ticket arrives, it often starts a process I call "Ticket Ping-Pong": instead of solving the problem, it is easier to forward the ticket to a different IT department and let them take care of it.

Some user help desks assign all tickets to the networking team because "it's always the network's fault." With that mindset, the people working in the UHD put you in the position of proving that your system is not responsible for causing the problem. That isn't the best approach in many cases. Problems could be solved more quickly and effectively if everybody worked together.


One Monitoring to rule them All

It is very common for every IT department to have its own monitoring in place, often a highly specialized system that comes straight from the hardware vendor: a shiny little box from the trusted vendor they have been using for ages. These systems have their benefits and are often a combination of management and monitoring. For the server guys, for example, there are no problems unless they show up in their own monitoring system. For a specific problem confined to one system, that works. But in the real world you often face more complex problems that span multiple systems. You need monitoring that covers all the complexity in your infrastructure and can handle different device types and services. The highly specialized, vendor-specific monitoring can coexist with that, but all the IT departments have to break up the silos and start working together. The general mindset should be: we are all in the same boat. A monitoring project can build bridges and bring the different IT departments closer together.

The goal should be to have all systems in the environment on the same monitoring at the end of the day. That creates visibility and trust. When everybody is looking at the same monitoring, they share the same knowledge of what is going on when a ticket shows up. When the server admin can see in one pane of glass that the firewall is running at 100% CPU utilization, he knows how to address the ticket, and that it is probably a good idea to wait for feedback from the firewall guys.

In times of virtualization and SDN this is even more important. There are so many dependencies between the different parts of the infrastructure that the initial entry point is hard to figure out. Sometimes the problem is hiding behind the different layers of virtualization. It is a big effort to bring all the systems into centralized monitoring, but it is absolutely worth doing. At the end of the day, all software-defined anything runs on hardware, and that hardware needs to be monitored.


OpenStack Summit 2016

Posted by kong.yang Employee Apr 22, 2016

This marks my first year attending OpenStack Summit. I am as giddy as I was when World of Warcraft first came out and everyone was racing to level 60. My hope is to learn and power level myself during my time at OpenStack. I will be leveraging my industry friends and the OpenStack community to fast track my working knowledge.




OpenStack Summit is first come, first served for most of the sessions, with RSVPs for the working/hands-on training sessions. It puts the onus on you to prepare, prioritize, and decide. And, in some cases, it's a decision between SMEs who are presenting AWESOMESAUCE material in separate sessions at the same time. All that really means is that I'll have to hustle to engage with all the SMEs in the areas that interest me, such as containers, automation and orchestration, microservices, etc. Nothing like getting the 411 directly from the source.


Next Friday I'll blog about my observations and takeaways from OpenStack Summit and offer some tips on using and contributing to the OpenStack Community. If you're attending as well, let's try to meet up. It's a small IT world, after all.


What OpenStack projects are you currently working with or thinking about incorporating into your existing environment? Which ones excite you and why? Please comment below.


4/18 Seattle SWUG Recap

Posted by Wendy Abbott Administrator Apr 21, 2016

On Monday, we held our first Seattle #SWUG (SolarWinds User Group) at the Living Computer Museum.

Thwacksters from the Pacific Northwest, and from as far away as Missoula, Montana (shout out to edsando), gathered together and united in our geekiness and love of SolarWinds.




In an effort to keep the conversation going and provide you with the valuable resources that were mentioned at the SWUG, here’s a recap of what went down:


MC for this SWUG: adatole


‘Product Roadmap Discussion + Q & A’

cobrien, Sr. Product Manager (Networks)

patrick.hubbard, Head Geek (Systems)

stevenwhunt, Sr. Product Manager (Systems)

     *Not in attendance, but can help with any follow-up questions regarding systems products.



Network products:


Systems products:


Additional Resources:


‘SolarWinds Best Practices & Thinking Outside the Box’

Dez, Technical Product Manager

KMSigma, Product Manager


For this session we asked attendees to vote on what they wanted to talk about. The top-voted topic was “Alerting best practices: Leveraging custom properties to reduce alert noise”. They talked about muting alerts so you’re still getting the statistics, but you’re not getting alerted to things that don’t require immediate attention.

N_Mute, I_Mute, etc.


They also talked about how you can utilize custom properties to group alerts or send alerts to only certain teams in your organization.
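The muting and routing ideas can be sketched generically. The N_Mute property name follows the session's convention, but the dictionaries and functions here are a made-up illustration, not the SolarWinds data model or API:

```python
def actionable_alerts(alerts, nodes):
    """Drop alerts for nodes whose N_Mute custom property is set.
    Statistics still accumulate upstream; only the notification is suppressed."""
    return [a for a in alerts
            if not nodes.get(a["node"], {}).get("N_Mute", False)]

def route_alerts(alerts, nodes):
    """Group the surviving alerts by a Team custom property, so only the
    responsible team in the organization gets notified."""
    routed = {}
    for a in actionable_alerts(alerts, nodes):
        team = nodes.get(a["node"], {}).get("Team", "unassigned")
        routed.setdefault(team, []).append(a)
    return routed

nodes = {
    "core-sw1": {"N_Mute": False, "Team": "network"},
    "lab-sw9":  {"N_Mute": True,  "Team": "network"},  # maintenance window
    "db-01":    {"N_Mute": False, "Team": "database"},
}
alerts = [
    {"node": "core-sw1", "msg": "High CPU"},
    {"node": "lab-sw9",  "msg": "Node down"},  # muted, never alerted
    {"node": "db-01",    "msg": "Low disk"},
]
print(route_alerts(alerts, nodes))
```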


The other noteworthy topic they covered was “Optimizing the SolarWinds implementation: Building Orion Servers based on Role”.

KMSigma made it look easy with this diagram:

swug- optimizing server.png


For more tips on how to optimize Orion, check out this video: Optimizing SolarWinds Orion - YouTube


Additional resources:


‘Customer Spotlight - How University of Washington Integrated the Syslog, Trap and Alert Manager’

RichardLetts, Network Operations Center Manager, SolarWinds MVP


Richard shared some interesting facts about UW (most surprising is that their budget is several billion dollars) and gave some useful tips on traps & alerts, and how to get started with SQL.



Resource links:


‘Customer Spotlight - How Atmosera is Leveraging SolarWinds to Deliver a Superior Customer Experience’

byrona, Systems Engineer, SCP, SolarWinds MVP

jcheney, SVP Client Operations


Jared Cheney and Byron Anderson gave us an overview of Atmosera and their evolution as a company and discussed how SolarWinds helps meet their monitoring needs for Hybrid Cloud.


They also made the entire room jealous with their NOC views #NOCEnvy.

atmosera noc views.jpg


They’re using NPM, NCM, IPAM, SAM, and LEM to support their customers every day and to ensure the health of their network and thousands of applications. Atmosera uses LEM to meet HIPAA/HITECH, PCI DSS 3.1, IRS 1075, NIST 800-53, and several other compliance standards. They said LEM makes compliance audits a breeze with its secure collection and storage of logs, quick & easy reporting, and fast alert creation.


‘SolarWinds User Experience: Dear New Orion User…’

kristin.bongiovanni, User Experience


In a nutshell, we want your feedback! Here’s how to get involved:


Thank you to everyone who attended! We really enjoyed meeting and speaking with each of you.


And last but not least, thank you to our sponsor and professional services partner, Loop1Systems (BillFitz_Loop1 and mrxinu) for booking the venue & hosting the happy hour!


If you left without filling out a survey, please help us out by telling us how we can make SWUGs even better>> 160418 Seattle SWUG Survey


If you’re in the Atlanta, GA area sign up for our upcoming SWUG>> June 9, 2016 - Atlanta, Georgia SWUG

TL;DR: 'Continuous Improvement' promotes a leaner cycle of data gathering and prioritized fixes that can deliver big benefits for Ops teams without large investments in organizational change.

People are thinking and talking about network automation more than ever before. There is a bewildering array of terms and acronyms bandied about. Whether people are speaking about DevOps or SDN, the central proposition is that you'll reach a nirvana of automation where all the nasty grunt work is removed and engineer time is spent, erm... engineering.

Yet many engineers and network managers are rejecting the notion of SDN and DevOps. These folk run real warts-and-all networks and are overwhelmed by the day-to-day firefighting, escalations, repetitive manual processes, inter-departmental friction, etc. They can see the elegance and power of software-defined networks and long for the smooth-running harmony of a DevOps environment. Most engineers can see the benefits of DevOps but see no path; they simply don't know how to get to the promised land.

Network equipment vendors purport to solve your management and stability problems by swapping your old equipment with newer SDN-capable equipment. Call me skeptical, but without better processes your shiny new equipment is just another system to automate and manage. I don't blame network equipment vendors for selling solutions, but it's unlikely their solution will solve your technical debt and stability issues. Until your operational and process problems are sufficiently well defined, you'll waste time and money hopelessly trying to find solutions.

DevOps is an IT philosophy that promises many benefits, including holistic IT, silo elimination, developers who are aware of operational realities, continuous integration, tighter processes, increased automation, and so on. I'm a complete fan of the DevOps movement, but it requires nothing short of a cultural transformation, and that's kinda hard.

I propose that you start with 'Continuous Improvement', which is an extremely powerful component of the DevOps philosophy. You start by focusing your limited resources on high-leverage tasks that increase your view of the network. You gather data, use it to identify your top pain point, and focus your efforts on eliminating that pain point. If you've chosen well, you should have enough extra hours to start the process again. In this virtuous-circle scenario you have something to show for every improvement cycle, and a network that becomes more stable as time passes.

Adopting 'Continuous Improvement' can deliver the fundamental benefits of DevOps without needing to bring in any other teams or engage in a transformation process.

Let's work through one 'Continuous Improvement' scenario:

  1. Harmonize SNMP and SSH access - The single biggest step you can take towards automation is to increase the visibility of your network devices. Inconsistent SNMP and SSH access to your network devices is one of the biggest barriers to visibility. Ensure you have correct and consistent SNMP configuration. Make sure you have TACACS- or RADIUS-authenticated SSH access from a common bastion. This can be tedious work, but it provides a massive return on investment; all of the other gains come from simplifying and harmonizing access to your network devices.
  2. Programmatically pull SNMP data and configurations - This step should be easy; just gather the running config and basic SNMP information for now. You can tune it all later.
  3. Analyze - Analyze the configuration and SNMP data you gathered. Talk to your team and your customers about their pain points.
  4. Prioritize - Prioritize one high-leverage pain point that you can measure and improve. Don't pick the gnarliest problem. Pick something that can be quickly resolved but saves a lot of operational hours for your team. That is high leverage.
  5. Eliminate - Put one person on eliminating the primary pain point, and make it a priority. You desperately need a quick win here.
  6. Celebrate - Woot! This is what investment looks like. You spent some engineer hours, but you got more engineer hours back.
  7. Tune up - Identify the additional data that would help you make better decisions in your next cycle, and tune your management system to gather that extra data... then saddle up your horse and start again.
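As a rough illustration of step 2, the collection pass can start as little more than an inventory loop that emits one SNMP query and one config pull per device. The hostnames, community string, bastion name, and tool choices below are all assumptions for the sketch, not requirements:

```python
def collection_commands(devices, community="public", bastion="bastion"):
    """Build per-device collection commands for step 2: one SNMP query for
    sysDescr and one running-config pull over SSH via the common bastion."""
    cmds = []
    for host in devices:
        cmds.append(f"snmpget -v2c -c {community} {host} sysDescr.0")
        # -J uses the bastion as an SSH jump host
        cmds.append(f"ssh -J {bastion} {host} 'show running-config'")
    return cmds

for cmd in collection_commands(["rtr1.example.net", "sw2.example.net"]):
    print(cmd)
```

Because step 1 harmonized credentials and access paths, the loop stays uniform; per-device special cases are exactly the technical debt the cycle is meant to burn down.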


You don't need to overhaul your organization to see the benefits of DevOps. You can make a real difference to your team's operational load and happiness by focusing your limited resources on greater network visibility and intelligent prioritization. Once you see real results and buy yourself some breathing room, you'll be able to dive deeper into DevOps with a more stable network and an intimate knowledge of your requirements.


The Actuator - April 20th

Posted by sqlrockstar Employee Apr 20, 2016

Things I find amusing from around the Internet…


DARPA Challenge Targets The Electromagnetic Spectrum

Forget about IPv4 running out, where will we get more EM spectrum? Sounds a bit ominous from the title, but read through and then think about how DARPA will be using machine learning to find a way to reduce bandwidth bottlenecks.


Introducing Application Insights Analytics

Ignoring their horrible choice of pie charts, the fact is this system “ingests over 1 trillion events and 600TB a day”. That’s impressive; I don’t care who you are. And it’s interesting to note the steps Microsoft® is taking in the field of application analytics.


Startup Uses Mathematical Verification For Network Security

This takes software-defined networking to a whole new level. I think it’s a great step forward for network security, but it still won’t prevent an employee from falling victim to a phishing scam, or social engineering in general. Data just has a way of escaping, no matter what.


U.S. Textile Industry Turns to Tech as Gateway to Revival

How soon before we can change the color of our clothes to match our mood? And then, how long before someone hacks my shirt?


Data USA makes government data easier to explore

Because if there is one thing our government specializes in, it's making things easy to understand. But hey, if you are looking for data sets to get started with exploring some analytics tools, this is a good place to go shopping.


The 8-Bit Game That Makes Statistics Addictive

As if I needed more reasons to get excited about statistics. Whut? Am I the only one enjoying this game? OK then.


Should I be upset that the NYT credulously reviewed a book promoting iffy science?

Yes, yes you should. We *all* should be upset. Unfortunately, this is the world in which we live, when NYT articles put opinion on display and don't back it up with facts. Then again, if we wanted the facts we'd never allow someone like Dr. Oz to be so popular.


I never worry about database design

And neither should you.



As government agencies shift their focus to virtualization, automation and orchestration, cloud computing, and IT-as-a-Service, those who were once comfortable in their position as jacks-of-all-IT-trades are being forced to choose a new career path to remain relevant.


Today, there’s very little room for “IT generalists.” A generalist is a manager who possesses limited knowledge across many domains. They may know how to tackle basic network and server issues, but may not understand how to design and deploy virtualization, cloud, or similar solutions that are becoming increasingly important for federal agencies.


But IT generalists can grow their careers and stay relevant. That hope lies in choosing between two different career paths: that of the “IT versatilist” or “IT specialist.”


The IT Versatilist


An IT versatilist is someone who is fluent in multiple IT domains. Versatilists have broadened their knowledge base to include a deep understanding of several of today’s most buzzed-about technologies. Versatilists can provide their agencies with the expertise needed to architect and deliver a virtualized network, cloud-based services, and more.


Versatilists also have the opportunity to help their agencies move forward by mapping out a future course based on their familiarity with deploying innovative and flexible solutions. This strategic support enhances their value in the eyes of senior managers.


The IT Specialist


Like versatilists, IT specialists have become increasingly valuable to agencies looking for expertise in cutting edge technologies. However, specialists focus on a single IT discipline, such as a specific application. For example, a specialist might have a very deep grasp of security or storage, but not necessarily expertise in other adjacent areas.


Still, specialists have become highly sought-after in their own right. A person who’s fluent in an extremely important area, like network security, will find themselves in-demand by agencies starved for security experts. This type of focus can nicely complement the well-rounded aspect that versatilists bring to the table.


Where does that leave the IT generalist?


Put simply – on the endangered list.


The government is making a major push toward greater network automation. Yes, this helps take some items off the plates of IT administrators, but it also minimizes the government’s reliance on human intervention. Those who have traditionally been “keeping the lights on” might be considered replaceable commodities in this type of environment.


If you’re an IT generalist, you’ll want to expand your horizons to ensure that you have a deep knowledge and expertise of IT constructs in at least one relevant area. Relevant disciplines will most likely center on things like containers, virtualization, data analytics, OpenStack, and other new technologies.


Training on these solutions will become essential, and you may need to train yourself. Attend seminars or webinars, scour educational books and online resources, and lean on vendors to provide additional insight and background into particular products and services.


Whatever the means, generalists must become familiar with the technologies and methodologies that are driving federal IT forward. If they don’t, they risk getting left out of future plans.


Find the full article on our partner DLT’s blog, TechnicallySpeaking.


Databases, vSphere and vSAN

Posted by mbleib Apr 18, 2016

Recently, a customer who already has a vSAN implementation in place, but only for testing purposes, asked me about the potential hazards of virtualizing some of their databases onto vSAN. To be fair, the initial conversation began with simply using VMware as a basis for some of their mission-critical databases. Now, I’ve always been a V1 (Virtualize First) proponent, and I can remember going back to my early days at EMC, where I gave a well-received presentation on virtualizing mission-critical apps on VMware. However, I’ve also been a realist in knowing that not every application is a viable candidate for virtualization.


There are many reasons why some apps may simply not be appropriate candidates to be VMs. Historically, though, these have been functional anomalies: hardware insurmountables like dongles; Unix operating systems like AIX or Solaris; or licensing characteristics wherein, for example, Oracle might demand licensing every socket in an entire environment in order to locate an Oracle infrastructure in even one segregated cluster. In the latter case, this proved monetarily prohibitive. Incidentally, it seems as if Oracle is loosening its stance on this policy, offering customers the option to prove the isolation of that cluster so that only the sockets on the hosts to which the Oracle app might be moved need to be licensed, alleviating a large amount of the cost that previously made it not cost-effective.


I’ve long felt that even a single VM that consumes an entire ESX host would be preferable to standing that same machine up on bare metal. Things like uptime, vMotion, and VMware snapshotting add so much functionality on an architectural level that, to me as an administrator of the VMware infrastructure, it still was logical to virtualize.


We have so much power available now to allocate to individual VMs that virtualizing a database, even a high-transaction database, is not only acceptable, but preferable.


Adding vSAN into the equation, particularly today with all its newer specifications, seems to me to be, again, simply the logical choice. Now that the IO issues are addressable, thanks to the option of an all-SSD vSAN implementation, even mission-critical databases can be delivered all the disk-based IO they require, along with the extended functionality of disk redundancy across hosts. vSAN not only improves the benefits that vSphere brings to the equation, but also increases availability to an extent that hadn’t really been available previously.


VMware has some really solid reference architecture on virtualizing Oracle: Here

And Microsoft SQL Server (Albeit on the preceding version of vSAN, v.6.1):



These are really excellent references for the design of the environment, and also offer some fantastic design points to follow when implementing the databases themselves.


I recently had the benefit of a presentation in Palo Alto from the vSAN team, including the always entertaining and hugely knowledgeable Rawlinson Rivera ( @PunchingClouds ), who briefed us, the Storage Field Day crew (#SFD9), on the benefits of vSAN and the newest features in version 6.2. I would be happy to expound on these, but would rather refer you to the following:

Here are all the presentations we received: Here


So, with that in mind, my answer to the question “Should I virtualize this database?” is: most definitely.


Pardon the Interop-tion

Posted by kong.yang Employee Apr 15, 2016

I'm thrilled to be attending and engaging in Interop® 2016, which will be held at the Mandalay Bay Convention Center in Las Vegas from May 2-6. Even better, I'm honored to say I'll also be speaking at the event:


IT pros need to bridge any technology construct to business utility. Utility manifests as disruptive innovation that will add value to the business and is usually reflected in applications. To realize maximum value, the end game for IT operations is enabling continuous integration and delivery of services. There are four skills that any IT professional can learn and use to enable disruptive innovation without causing disruption. This session will introduce the DART skills: Discovery, Alerting, Remediation, and Troubleshooting.

Attendees will receive:

    • A walk-through of the DART framework to deal with DevOps, Infrastructure as code, and Hybrid IT.
    • Examples of the skills applied to real-world application scenarios.
    • Best practice tips and techniques to maximize each skill's utility to the business.


P.S. I’ll be there with my fellow Head Geek, adatole. If you’ll be at Interop Las Vegas, let us know and we can schedule a meet up.


Here's my Interop 2016 schedule:



Dark Reading Cyber Security Summit - Day 1

Monday, 8:30AM - 5:00PM

Speakers: Tim Wilson (Dark Reading), Michele Fincher (Social-Engineer)

Session Type: Summit

Track: Security





Hands-On Hacking

Monday, 8:30AM - 5:00PM

Speaker: John H. Sawyer (InGuardians)

Session Type: Workshop

Track: Security



Container Summit

Tuesday, 8:30AM - 5:00PM

Speakers: Bryan Cantrill, Jamie Dobson, Tianon Gravi, James Ford, Tom Jackson, Casey Bisson, Jason Mendenhall, Ken Owens, Victor Gajendran, Jane Arc, Paulo Pereira

Session Type: Summit

Track: Storage




Dark Reading Cyber Security Summit - Day 2

Tuesday, 8:30AM - 5:00PM

Speakers: Tim Wilson (Dark Reading), Stuart McClure (Cylance), Andy Jordan (Bishop Fox), Bhaskar Karambelkar (ThreatConnect), Gunter Ollmann (Vectra Networks), David Bradford (Advisen), Chris Scott (Crowdstrike)

Session Type: Summit

Track: Security



IDC at Interop Breakfast: Delivering Digital Transformation at Scale: Network Trends and Architectures

Wednesday, 7:15AM - 8:30AM

Session Type: Special Events


Hybrid Is the Assumption - So What's the Plan?

Wednesday, 11:45AM - 12:45PM

Speaker: Mark Thiele (Cloud Cruiser)

Session Type: Conference Session

Track: Cloud Connect


Trends in SaaS

Wednesday, May 4 | 2:15PM - 3:00PM

Speaker: Ryan Floyd (Storm Ventures)

Session Type: Conference Session

Track: Cloud Connect



Containers, Orchestrators and Platforms: The Impact on Virtualization

Thursday, 10:30AM - 11:30AM

Speaker: Scott Lowe (VMware, Inc.)

Session Type: Conference Session

Track: Virtualization & Data Architecture


DART: An IT Skills Framework to Disrupt Without Disrupting

Thursday, 11:45AM - 12:45PM

Speaker: Kong Yang (SolarWinds)

Session Type: Conference Session



Interop is the leading global IT infrastructure event series, offering in-depth education alongside a showcase of emerging technologies in an independent, vendor-neutral environment. For 30 years, Interop has brought the IT community together to explore the latest in network infrastructure, encouraging collaboration and interoperability. Through dynamic conference programs, Interop helps professionals at all career levels leverage the networks, systems, and applications that enable business innovation. The Interop Expo and InteropNet Demo Lab provide immersive, hands-on experiences, while connecting enterprise IT buyers with leading suppliers. Interop Las Vegas is the flagship event held each spring, with an annual event in Tokyo and Cloud Connect China in Shanghai. Interop is organized by UBM Americas, a part of UBM plc (UBM.L), an Events First marketing and communications services business.



O Captain! My Captain! The battle is won,

The votes are in, the bracket is done.

Though the waters were murky, now they’re clear as crystal,

Without a doubt, the ultimate captain wields a blaster pistol.

Introducing the Captain of the Millennium Falcon, Han Solo.

In twelve parsecs or less, he’ll get you where you need to go!


Han Solo was the captain that everyone loved to hate in this bracket battle.

Somehow he managed to stomp out the competition in every round, and ultimately earned the title of The Most Legendary Captain of All Time!




What are your final thoughts on this year’s bracket battle?


Do you have any bracket theme ideas for next year?


Tell us below!


The Actuator - April 13th

Posted by sqlrockstar Employee Apr 13, 2016

Welcome to the Actuator! I'm looking to make this a series of posts that provide links, musings, and snark from the series of tubes known as the Internet. If you don't know what an actuator is, you may be in the wrong place. You can show yourself out now.




This is what happens when you reply to spam email

Reminds me of a similar conversation I had with a Mr. Yan years ago. He was disappointed that I did not show up at that hotel in Lagos. Twice.


How to avoid brittle code

“If it hurts, do it more often.” That’s the same advice I would give my players back in my days of coaching basketball. Anytime you find yourself out of your comfort zone, find a way to make things more comfortable. Applying this concept to development just makes sense to me (now, of course).


Developers can run Bash Shell and user-mode Ubuntu™ Linux® binaries on Windows 10

Coming on the heels of the SQL Server® on Linux announcement, this is something else that is making adatole cry tears of joy. At least I think they are tears of joy. Hard to tell from here.


Thanks For Ruining Another Game Forever, Computers

Has it been 20 years since Deep Blue? Nice recap of how computers are slowly ruining games for humans, and how.


Passwords, 2FA, and Your “Digital Legacy”

I’ve started using some password management software recently, but didn’t think about the implications of 2FA and what it means should a family member need to access my accounts. This post is a good reminder about the extra steps needed.


United Airlines implements web security based upon surveys of their users that are infected with keystroke logging.

I wish I were making this up, but it’s a real conversation folks.


Hacker reveals $40 attack that steals police drones from 2km away

I’d like to propose that we rename IoT the “Internet of Unsecured Things” (iOUT). Maybe that way we can educate people about the nature of connected systems.


Microsoft® launches Bot Framework to let developers build their own chatbots

I’ve seen this movie before. It doesn’t end well for the humans. Then again, maybe it does:



The incorrect use of personal devices or the inadvertent corruption of mission-critical data by a government employee can turn out to be more than simple accidents. These activities can escalate into threats that can result in national security concerns.


These types of accidents happen more frequently than one might expect, and they have government IT professionals worried, because one of the biggest concerns continues to be threats from within.


In last year's cybersecurity survey, my company, SolarWinds, discovered that administrators are especially cognizant of the potential for colleagues to make havoc-inducing mistakes. Yes, it’s true: government technology professionals are just as concerned about the person next to them making a mistake as they are about an external Anonymous-style group or a rogue hacker.


So, what are agencies doing to tackle internal mistakes? Primarily, they’re bolstering federal security policies with their own security policies for end users. This involves gathering intelligence and providing information and training to employees about possible entry points for attacks.


While this is a good initial approach, it’s not nearly enough.


The issue is the sheer volume of devices and data that create these mistakes in the first place. Unauthorized and unsecured devices could be compromising the network at any given time, without users even realizing it. Phishing attacks, accidental deletion or modification of critical data, and more have all become much more likely to occur.


Any monitoring of potential security issues should include the use of technology that allows IT administrators to pinpoint threats as they arise, so they may be addressed immediately and without damage.


Thankfully, there are a variety of best practices and tools that address these concerns and nicely complement the policies and training already in place, including:


  • Monitoring connections and devices on the network and maintaining logs of user activity.
  • Identifying what is or was on the network by monitoring network performance for anomalies, tracking devices, offering network configuration and change management, managing IT assets, and monitoring IP addresses.
  • Implementing tools identified as critical to preventing accidental insider threats, such as those for identity and access management, internal threat detection and intelligence, intrusion detection and prevention, SIEM or log management, and Network Admission Control.
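As a rough illustration of the first practice in the list above, here is a minimal Python sketch that correlates a user-activity log against a list of approved devices and flags users seen on unauthorized ones. The log format, device names, and the `AUTHORIZED_DEVICES` set are all hypothetical assumptions for illustration; a real deployment would rely on a monitoring or SIEM product rather than hand-rolled scripts.

```python
# Hedged sketch: flag users whose logged activity involves devices
# that are not on an approved list. Log format is assumed to be
# "<timestamp> <user> <device> <action>" -- purely illustrative.
from collections import defaultdict

AUTHORIZED_DEVICES = {"laptop-042", "workstation-107"}  # hypothetical allow-list

def flag_unauthorized(log_lines):
    """Return a map of user -> set of unapproved devices they used."""
    alerts = defaultdict(set)
    for line in log_lines:
        parts = line.split()
        if len(parts) < 4:
            continue  # skip malformed entries
        _, user, device, _ = parts[:4]
        if device not in AUTHORIZED_DEVICES:
            alerts[user].add(device)
    return dict(alerts)

logs = [
    "2016-04-01T09:00 alice laptop-042 login",
    "2016-04-01T09:05 bob usb-stick-99 mount",
    "2016-04-01T09:07 bob usb-stick-99 copy",
]
print(flag_unauthorized(logs))  # {'bob': {'usb-stick-99'}}
```

The point of the sketch is the correlation step: tying an anomaly (an unapproved device) directly back to a particular user, which is exactly what the tools discussed here automate at scale.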


Our survey respondents called out each of these tools as useful in preventing insider threats. Together and separately, they can assist in isolating and targeting network anomalies, and they can help IT professionals correlate a problem directly to a particular user. The software, combined with the policies and training, can help administrators attack an issue before it goes from simple mistake to “Houston, we have a problem.”


The fact is, data that’s accidentally lost can easily become data that’s intentionally stolen. As such, you can’t afford to ignore accidental threats, because even the smallest error can turn into a very large problem.


Find the full article on Defense Systems.


Interested in this year’s cyber security survey? Go here.

We have been watching the spread of ransomware and this malware's success with increasing concern.

Hospitals appear to be of particular interest this year.


And who hasn't had a friend or colleague call in a panic this year already?


As many of you know, most ransomware gets onto the system through a phishing attack, so Adobe's emergency update earlier this week was concerning on multiple levels.


1 - Does this mean we can expect ransomware drive-by-downloads?

2 - What is the next bug in Flash that will be exploited?


If you haven't read about this update yet, you can hit any of Ars Technica, MacRumors, and, of course, the popular press.


This patch includes updates to block the Cerber form of ransomware, and the fact that it is an emergency patch means the exploit has been seen in the wild.

If you haven't already done so, please update Flash on both Windows and macOS.


And share your experiences. As we all know with ransomware, either you have a backup or you pay up.



Posted by kong.yang Employee Apr 8, 2016

The CIO’s SLA is their service level agreement to their organization’s CEO, COO, and CFO. Heck, it’s their agreement to all of their org’s C-levels. For IT professionals, the SLA represents their CIO’s goals and objectives. And in this case, the SLA acronym is so apropos.

  • S stands for secure. Try not to get breached.
  • L stands for lean. Maximize ROI and do more with less.
  • A stands for agile. Quickly pivot on anything (technology, services, and IT processes) that will bring benefits to business operations.


These goals have a tremendous impact on IT professionals and their daily modus operandi. But don't take my word for it; the SolarWinds 2016 IT Trends Report shows that IT professionals recognize these directives as the keys to success in the hybrid IT paradigm. According to the IT professionals who responded, the top three barriers to greater cloud adoption, by weighted rank, are security and compliance, the need to support legacy systems, and budget limitations. Security and compliance is the S component. Budget limitations represent the L component. The need to support legacy systems speaks to tech debt and adds to tech inertia, which is the A component.


These challenges that IT pros have to overcome align exactly with the CIO’s SLA. Check out the rest of the results from the SolarWinds 2016 IT Trends Report to learn more.

Does the CIO’s SLA align with your directives? Do you agree with the results of the IT Trends Report? Please comment below.
