
Geek Speak


The IT Training quandary

Posted by mbleib Apr 27, 2017

What do you do when your employer says there will be no more training? What do you do when you know your org should move to the cloud, or at least some discrete components of it? How do you stay current and avoid stagnating? Can you do this within the org, or must you go outside to gain the skills you seek?

 

This is a huge quandary…

 

Or is it?

 

Not too long ago, I wrote about becoming stale in your skillsets, and how that becomes a career-limiting scenario. The “gotcha” here is that your employer often doesn’t place the same emphasis on training as you do for your career. The employer may believe in getting you trained up, but you may feel that the training offered is less than marketable or forward-thinking. Or, worse, the employer doesn’t feel that training is necessary at all. In their minds, you are capable of doing the job you’ve been asked to do, and the movement toward future technology is not mission critical. Or, for that matter, there’s simply no budget assigned for training.

 

These scenarios are confusing and difficult. How do you deal with the disparity between what you want and what your employer wants?


The need for strategy here is truly critical. I don’t advocate misleading your employer, but of course, we all have our own interests in mind and want to leverage our careers. Some people are satisfied with what they’re doing and don’t long to keep their skills sharp, while others are like sharks: not living unless they’re moving forward. I count myself among the latter group.

 

You can research the training you’d be able to get for free. I know, for example, that Microsoft offers much of its Azure training online at no cost. Again, I don’t recommend boiling the ocean; choose what you pursue strategically. Of course, the specific course you wish to take might force you to actually pay for the training you seek.

 

A sandbox or home-lab environment, where you can build up and tear down test platforms, is another way to train yourself. Of course, earning certifications in that mode is somewhat difficult, as is gaining access to the right tools to accomplish your training in the ways the vendor recommends.

 

I advocate doing research on a product category that would benefit the company in today’s environment, but can also act as a catalyst toward the movement to the cloud (should that be on the horizon). The most useful on-ramp in this case is likely Backup as a Service or DR as a Service. Research into new categories of backup, such as Cohesity, Rubrik, or Actifio, where data management, location, and data awareness are critical, can help move the organization toward cloudy approaches. If you can effectively sell the benefits of your vision, your star should rise in the eyes of management. Sometimes it may feel like you’re dragging the technology behind you, or pushing undesired tech at your IT management, but fighting the good fight ethically is well worth it. Often, you can orchestrate a cost-free proof of concept on products like these to facilitate the research and prove the benefit to the org without significant outlay.

 

In this way, you can guide your organization toward the technologies that are most beneficial to it, both for solving today’s issues and for the end goal of a forward-thinking strategy. Some organizations are simply not conducive to this approach, which leads me to my next point.

 

Sometimes, the only way to better your skills or improve your salary and stature is outside your current organization. This is a very dynamic field, and movement from vendor to end customer to channel partner has proven a fluid stream. If you find that you’re just not getting satisfaction within your IT org, you really should consider whether moving on is the right approach. This drastic step should be approached with caution, as the appearance of hopping from gig to gig can be viewed by an employer as a negative. However, there are times when the only way to move upward is to move onward.

sqlrockstar

The Actuator - April 26th

Posted by sqlrockstar Employee Apr 26, 2017

Heading to Salt Lake City this week for the SWUG meeting on Thursday. If you are in the area I hope you have the chance to stop by and say hello. I'll be there to talk data and databases and hand out some goodies. I haven't been to Salt Lake City in a few years, and I'm looking forward to being there again even if there is a chance of snow in the forecast.

 

As always, here's a handful of links from the intertubz I thought you might find interesting. Enjoy!

 

Steve Ballmer Serves Up a Fascinating Data Trove

As someone who loves data, I find this data project to be the best thing since Charles Joseph Minard. Head over to https://usafacts.org/ and get started on finding answers to questions you didn't know you wanted to ask.

 

The New York Times to Replace Data Centers with Google Cloud, AWS

Eventually, we will hit a point where *not* having your data hosted by a cloud provider will make headlines.

 

Do You Want to be Judged on Intentions or Results?

Short but thought-provoking post. I learned a long time ago that no one cares about effort; they only care about results. But I hadn't stopped to think about how I want to be judged, or how I could control that conversation.

 

Cybersecurity Startup Exposed Hospital Network Data in Demos

Whoops. I'm starting to understand why they didn't earn that contract.

 

Microsoft is Bringing AI and More To SQL Server 2017

In case you missed it, last week Microsoft announced the new features coming in SQL Server 2017. It would appear that Microsoft sees the future of data computing to include features that go beyond just traditional storage.

 

Windows and Office align feature release schedules to benefit customers

In another announcement, Microsoft announced fewer updates to their products. But what they are really announcing is the transition from traditional client software to subscription-based software for their core products such as Office and Windows.

 

Uber tried to fool Apple and got caught

If you were looking for another reason to dislike how Uber operates as a company, this is the link for you.

 

Took the family for a drive to the Berkshires last Friday and realized that my debugging skills are needed everywhere I go:


Hybrid IT continues to grow as more agencies embrace the cloud, so I wanted to share this blog written last year by our former Chief Information Officer, Joel Dolisy, which I think is still very valid and worth a read.

 

 

Most federal IT professionals acknowledge that the cloud is and will be a driving component behind their agencies’ long-term success; still, no one expects to move their entire IT infrastructure to the cloud.

 

Because of regulations and security concerns, many administrators feel it’s best to keep some level of control over their data and applications. They like the efficiencies that the cloud brings, but they aren’t convinced that it’s suitable for everything.

 

Hybrid IT environments offer benefits, but they can also introduce greater complexity and management challenges. Teams from different disciplines must come together to manage various aspects of in-house and cloud-based solutions. Managers must develop special skillsets that go well beyond traditional IT, and new tools must be deployed to closely monitor this complex environment.

 

Here are a few strategies managers can implement to close the gap between the old and the new:

 

1. Use tools to gain greater visibility

Administrators should deploy tools that supply single access points to metrics, alerts, and other data collected from applications and workloads, allowing IT staff to remediate, troubleshoot, and optimize applications, regardless of where they may reside.

 

2. Use a micro-service architecture and automation

Hybrid IT models will require agencies to become leaner, more agile, and more cost effective. Traditional barriers to consumption must be overcome, and administrators should gain a better understanding of APIs, distributed systems, and overall IT architectures.

 

Administrators must also prepare to automatically scale, move, and remediate services.

 

3. Make monitoring a core discipline

Maintaining a holistic view of your entire infrastructure allows IT staff to react quickly to potential issues, enabling a more proactive strategy.

 

4. Remember that application migration is just the first step

Migration is important, but the management following the initial move might be even more critical. Managers must have a core understanding of an application’s key events and performance metrics and be prepared to remediate and troubleshoot issues.

 

5. Get used to working with distributed architectures

Managers must become accustomed to working with various providers handling remediation as a result of outages or other issues. The result is less control, but greater agility, scalability, and flexibility.

 

6. Develop key technical skills and knowledge

Agency IT professionals need to learn service-oriented architectures, automation, vendor management, application migration, distributed architectures, API and hybrid IT monitoring, and more.

 

7. Adopt DevOps to deliver better service

DevOps breaks down barriers between teams, allowing them to pool resources to solve problems and deliver updates and changes faster and more efficiently. This makes IT services more agile and scalable.

 

8. Brush up on business skills

Administrators will need to hone their business-savvy sides. They must know how to negotiate contracts, become better project managers, and establish the technical expertise necessary to understand and manage various cloud services.

 

Managing hybrid IT environments takes managers outside their comfort zones. They must commit to learning and honing new skills, and use the monitoring and analytics tools at their disposal. It’s a great deal to ask, but it’s the best path forward for those who want to create a strong bridge between the old and the new.

 

Find the full article on Government Computer News.

The other day, as is often the case when an engineer is deep in a troubleshooting task that requires a restart if interrupted, I got a request for advice. “Hey, if you have a second I wanted to ask a question about standardizing DevOps tools. Should my friend use Chef, Puppet, or something else to get DevOps going?” He couldn't have understood the task-switching loss he'd just caused, so I did my best impression of the wisest help desk gurus on THWACK: I took a breath, found a smile, and answered the question.

 

“Standardizing” the tools of DevOps is anathema to the goal of DevOps; it's a bad habit carried over from old-school, under-resourced IT. With waterfall-based, help desk interrupt-driven, top-down IT, there’s too often a belief that if only the organization would adopt a Magic Tool, all would be well. DevOps, and more correctly the Agile principles that beget DevOps as an outcome, is bigger than any tool, vendor, or technology.

 

For an organization to successfully make DevOps work, especially to achieve its promise of breaking the logjams blocking the digital transformation the enterprise so desperately wants, standardizing Agile principles and methods should be the real goal. For example, if the Ops team adopts Scrum and comes to value ScrumMasters who resist corrupting urges to grow scope after sprints begin, it doesn’t matter whether the team chooses Ansible, Chef, Puppet, or AWS or Azure services for automation.

 

If critical teams standardize on methods that result in predictable, quality outcomes, they can each choose the tools that work best for them, or change them as needed to take advantage of new features. Tools selection or replacement becomes just one more element in the product backlog, to be balanced against business goals, like everything else. It substantially reduces the tendency to paralyze the team waiting on The Penultimate Tool of the Ages, before it can even get started.

 

Where standardizing DevOps methods over tools really pays off is assured quality. If a team standardizes on a principle of Minimal Acceptable Monitoring, it will ensure throughout Dev, testing, deployment, and Ops that the right tools are used to quantify performance and user experience. This crucial measure of service quality then informs sprint goals, increasing quality and even efficiency over time. Even better, by adopting effective (versus impossibly idealized) Continuous Deployment, IT can help ensure a DevOps practice in which often-overlooked goals like security awareness are part of every change rather than an occasional review project.

 

If your aspiring pre-DevOps organization makes only one decision on a standard, make it this: everyone learns the 10 key principles of Agile. Note: I’m not saying anything about Scrum, Chef, stand-up meetings, or the Kanban board that runs my house chores. I’m not talking about actual adoption, timelines, project division, team realignment, or even two-pizza teams. The point is not to be prescriptive in the early stages. Resist the IT urge to go immediately to implementation, resolution, and ticket closure. Let... this... soak... in. Dream about how you’d build IT if you could start over without any existing technology or processes. How would you solve the macro requirement: how do I delight the humans who use technology? And a final pro tip: standardization on Agile principles is best done over several sessions as a team, offsite, with adult beverages to go with the pizza.

Has this situation happened to you? You've dedicated your professional career -- and let's be honest, your life -- to a subject, only to find “that's not good enough.” Maybe it comes from having too many irons in the fire, or it could be that there are just too many fires to be chasing.

 

Ericsson (1990) says that it takes 10,000 hours of deliberate practice (20 hours a week for 50 weeks a year over ten years = 10,000) to become an expert in almost anything.

 

I’m sure you’ve heard that Ericsson figure before, but in any normal field, the expectation is that you gain that expertise over the course of 10 years. How many of you can attest to spending 20 hours a day for multiple days, or even multiple weeks in a row, tackling whatever catastrophe the business demands, often driven by a lack of planning on their part? (Apparently, a lack of planning on their part IS our emergency when it comes to keeping that paycheck coming in!)

 

I got my start way back in security and development (the latter of which I won’t admit to if you ask me to code anything :)). As time progressed, the basic underpinnings of security began delving into other spaces. The message became, “If you want to do ANYTHING in security, you need networking skills or you won’t get very far.” To understand the systems you’re working on, you have to have a firm grasp of the underlying operating systems and kernels. But if you’re doing that, you’d better understand the applications, too. Oh, and in the late 1990s, VMware came out, which made most of this significantly easier and more scalable. Meanwhile, understanding what people do and how they do it only made sense if you understood systems operations. Nearly every task along the way wasn’t a casual few hours here or there, especially if your goal was to immerse yourself in something to truly understand it. It would quickly become a way of life, and before long you'd find yourself striving for, and achieving, expertise in far too many areas, updating your skill sets along the way.

 

As my career moved on, I found there to be far more overlap of specializations and subject matter expertise than clearly delineated silos. Where this would come to a head as a strong positive was when I worked with organizations as an SME in storage, virtualization, networking, and security, finding that the larger the organization, the more these groups would refuse to talk to each other. More specifically, if there was a problem, the normal workflow or blame assignment would look something like this picture. Feel free to provide your own version of the events you experience.

 

 

Given this all-too-typical approach to support by finger-pointing, expertise in multiple domains becomes a strong asset, since security people will only talk to other security people. Okay, not always, but also, yes, very much always. If you understand what they’re saying and where they’re coming from, asking, “Hey, do you have a firewall here?” means a lot more coming from someone who understands policy than from one of the other silos, for which they seemingly have nothing but disdain. Often, a simple network question posed by one network person to another can move mountains, because each party respects the ability or premise of the other. Storage and virtualization folks typically take the brunt of the damage: because of storage and hardware pool consolidation, they’re the easiest point of blame and regularly have to prove that problems aren’t their fault. Finally, the application guys simply won’t talk to us half the time, let alone mention that they made countless changes, while demanding to know what WE did wrong to make their application suddenly stop working the way it should. (Spoiler alert: It was an application problem.)

 

Have you found yourself pursuing one or more domains of subject matter expertise, either to just get your job done or to navigate the shark-infested waters of office politics? Share your stories!

Raise your hand if you have witnessed firsthand rogue or shadow IT. This is when biz, dev, or marketing goes directly to cloud service providers for infrastructure services instead of going through your IT organization. Let's call this Rogue Wars.

 

Recently, I was talking to a friend in the industry about just such a situation. They were frustrated with non-IT teams, especially marketing and web operations, procuring services from other people’s servers. These rogue operators were accessing public cloud service providers to obtain infrastructure services for their mobile and web app development teams. My friend's biggest complaint was that his team was still responsible for supporting all aspects of ops, including performance optimization, troubleshooting, and remediation, even though they had zero purview or access into the rogue IT services.

 

They were challenged by the cloud’s promise of simplified self-service. The fact that it's readily available, agile, and scalable was killing them softly with complexities their IT processes were ill-prepared for. For example, the non-IT teams did not follow proper protocol to retire the self-service virtual machines (VMs) and infrastructure resources that form the application stack. That meant they were paying for resources that no longer did work for the organization. Tickets were also being opened for slow application performance, but the IT teams had zero visibility into the public cloud resources. All they could do was tell the developers that the issue was not within the purview of internal IT. Unfortunately, they were still handed the responsibility of resolving the performance issue.
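One way internal IT can claw back some visibility is to sweep a billing or inventory export for VMs that look abandoned. Here is a minimal sketch of that idea; the VM names, field names, and thresholds are all invented for illustration, and real data would come from the provider's billing or monitoring API:

```python
from datetime import datetime, timedelta

# Hypothetical inventory export; real data would come from the cloud
# provider's billing or monitoring API.
vms = [
    {"name": "mktg-web-01", "avg_cpu_pct": 42.0, "last_request": "2017-04-20"},
    {"name": "dev-test-07", "avg_cpu_pct": 0.3,  "last_request": "2017-01-02"},
    {"name": "campaign-x",  "avg_cpu_pct": 0.1,  "last_request": "2016-11-15"},
]

def retirement_candidates(vms, today, idle_days=30, cpu_floor=1.0):
    """Flag VMs that look abandoned: near-zero CPU and no recent requests."""
    cutoff = today - timedelta(days=idle_days)
    return [v["name"] for v in vms
            if v["avg_cpu_pct"] < cpu_floor
            and datetime.strptime(v["last_request"], "%Y-%m-%d") < cutoff]

print(retirement_candidates(vms, datetime(2017, 4, 25)))
# ['dev-test-07', 'campaign-x']
```

Even a crude report like this turns "we're paying for resources doing no work" from a complaint into a concrete list that can be taken back to the teams who provisioned them.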

 

This is how the easy button of cloud services is making IT organizations feel the complex burn. Please share your stories of rogue/shadow IT in the comments below. How did you overcome it, or are you still cleaning up the mess?

sqlrockstar

The Actuator - April 19th

Posted by sqlrockstar Employee Apr 19, 2017

I missed a milestone announcement! The Actuator celebrated its one-year anniversary last week! I created this series as a fun way to keep in touch with everyone here on THWACK on a weekly basis. I never thought I would manage to keep it going, every week, for a full year. Thanks to everyone for their support. Here's to another year of mindless links!

 

In unrelated news, I'm putting the finishing touches on my slides for the Salt Lake City SWUG next week. I hope you get the chance to stop by and say hello.

 

As always, here's a handful of links from the intertubz I thought you might find interesting. Enjoy!

 

Printed titanium parts expected to save millions in Boeing Dreamliner costs

Not sure how I feel about flying in an airplane that was made on a 3D printer. I'm also wondering what this means for the future of manufacturing.

 

A New, More Rigorous Study Confirms: The More You Use Facebook, the Worse You Feel

Pretty sure Zuckerberg is going to unfriend Harvard after reading this article.

 

Microsoft Says Users Are Protected From Alleged NSA Malware

This seems to be a common trend lately: hackers release information, and companies respond by saying that everything is okay if you are up to date with patches. I can't help but feel we are pawns in a much larger game.

 

Architecting Microsoft SQL Server on VMware

For all the people who have ever told me "virtualizing database servers is hard," I present the updated guidelines from VMware. SPOILER ALERT: It's not that hard to virtualize your database workloads.

 

Nintendo Discontinues the NES Classic Edition

Because it was too popular, apparently.

 

Early Macintosh Emulation Comes to the Archive

Since we are walking down memory lane with the NES, here's a look at what Macintosh used to look like.

 

Emoji for fun and profit

This video with the CTO of Slack runs 30 minutes, but for anyone who has wondered about the history of emoji, it's worth a view. I can't be the only one, right?

 

As the kids scoured my yard for eggs yesterday afternoon I found myself thinking this exact thought:


By Joe Kim, SolarWinds Chief Technology Officer

 

It can be truly astounding to think about the scale of today’s largest government networks, which are growing larger and more complex every day.

 

As a public sector IT pro, it may seem like an impossible challenge to manage this growing behemoth. Ever-increasing numbers of network devices, servers, and applications give you less leeway for downtime, hiccups, or problems of any sort.

 

There is a range of strategies that government IT pros can employ to support network growth and scalability while helping to ensure that all architectural and infrastructural requirements are met, and system failover scenarios are accounted for.

 

As the IT environment expands, it becomes more important for monitoring and management systems to scale to keep up with growth. Most monitoring systems are built with the following elements, each with its own requirements and challenges to scale:

 

  • A server that hosts the monitoring product and polls for status and performance
  • A database where the polled information is stored for historical data access and reporting
  • A web console for software management, data visualization, and reporting
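The three elements above boil down to one loop: a poller gathers status, writes it to a database, and a console reads from that database. As a toy illustration (not any product's actual architecture), here is a minimal Python sketch; the node names and the `poll` stub are invented, and a real monitoring system would poll via SNMP, WMI, or an API:

```python
import sqlite3
import time

# Hypothetical node list; a real poller would discover these.
NODES = ["router-1", "switch-2", "web-3"]

def poll(node):
    """Stand-in for a real status/performance poll (e.g., an SNMP GET)."""
    return {"node": node, "status": "up", "response_ms": 12}

def main(cycles=1):
    db = sqlite3.connect(":memory:")  # the "database" element
    db.execute("CREATE TABLE metrics (ts REAL, node TEXT, status TEXT, response_ms REAL)")
    for _ in range(cycles):           # the "server" element: the polling loop
        for node in NODES:
            m = poll(node)
            db.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
                       (time.time(), m["node"], m["status"], m["response_ms"]))
        db.commit()
    # The "web console" element would read from this table for visualization.
    return db.execute("SELECT COUNT(*) FROM metrics").fetchone()[0]

if __name__ == "__main__":
    print(main())  # one metric row per node per cycle
```

Each of the three elements scales independently, which is why the variables below matter.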

 

Within this environment, three primary variables will affect a system’s scalability:

 

  1. Infrastructure size: The number of monitored elements (where an element is defined as a single, identifiable node, interface, or volume), or the number of servers and applications that can be monitored.
  2. Polling frequency: The interval at which the monitoring system polls for information. For example, statistics collected every few seconds instead of every minute will make the system work harder, and requirements will increase.
  3. The number of simultaneous users accessing the monitoring system.
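To make the interplay of the first two variables concrete, here is a back-of-the-envelope sketch; the element count and per-engine capacity are invented numbers for illustration, not product specifications:

```python
import math

def polls_per_second(elements, interval_seconds):
    """Sustained polling rate the monitoring server must handle."""
    return elements / interval_seconds

def engines_needed(rate, capacity_per_engine=500):
    """Polling engines required to spread the load (hypothetical capacity)."""
    return math.ceil(rate / capacity_per_engine)

# 12,000 monitored elements polled every 60 seconds vs. every 10 seconds:
baseline = polls_per_second(12_000, 60)    # 200 polls/sec
aggressive = polls_per_second(12_000, 10)  # 1,200 polls/sec

# Cutting the interval by 6x raises the sustained rate by 6x, which is
# why polling frequency dominates sizing decisions.
print(engines_needed(baseline), engines_needed(aggressive))  # 1 3
```

Distributing this rate across additional polling engines, as the tips later in the article suggest, is simply a matter of dividing it among servers.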

 

Those are the basics of understanding the feasibility of scalability. Now, let’s move on to ways to manage that environment.

 

A command center is particularly well suited to agencies with multiple regions or sites where the number of nodes to be monitored in each region would warrant both localized data collection and storage. It works well for regional teams that are responsible for their own environments and require autonomy over their monitoring platform. While the systems are segregated between regions, all data can still be accessed from the centrally located console.

 

Additional scalability tips

 

There are several additional strategies that will help manage an agency’s growing infrastructure:

 

Add polling engines: Distributing the polling load for the monitoring system among multiple servers will provide scalability for large networks.

 

Add web servers: Additional web servers can help support increasing numbers of concurrent monitoring sessions, helping to ensure that more users have uninterrupted web access to network monitoring software.

 

Add a failover server: To help ensure the monitoring system is always available, install a failover mechanism that will switch monitoring system operation to a secondary server if the primary server should fail.

 

Agency networks will certainly get large. It's the nature of an increasingly technically driven government. While it may seem overwhelming, implementing these few tactics will help IT managers embrace the growth and ultimately realize its value.

 

Find the full article on Government Computer News.


 

Recently, a rare moment of alignment occurred here at SolarWinds. All the Head Geeks came home to roost, brainstorm, debate, break bread together, and simply bask in the warm glow of friendship and camaraderie.

 

But the event, which only happens two or three times a year due to our speaking and convention appearances, caused some confusion among the rest of the staff: They didn't know what to call it. Were we a gaggle of Geeks? A herd? A NERD herd?

 

Other collections have interesting names. There's a murder of crows, a conspiracy of ravens, an ostentation of peacocks, and an exaltation of larks. There's a troop of baboons and a shrewdness of apes. A parade of elephants and a bloat of hippopotami.

 

Even among humans, we have some interesting group names: a blush of boys, a hastiness of cooks, or a superfluity of nuns.

 

So, I thought I would put it out to the ever-creative THWACKizens:

 

What do YOU call a collection, gathering, or grouping of Geeks? Are we:

  • A convention?
  • An argument or quibble?
  • An array or hash?
  • Or maybe a chaos or grok.

 

Share YOUR ideas in the comments below!

sqlrockstar

The Actuator - April 12th

Posted by sqlrockstar Employee Apr 12, 2017

Back from Telford and SQLBits and looking forward to some time at home before I head off to the SWUG meeting in Salt Lake City in two weeks. If you can attend I do hope you stop by and say hello. I'd love the opportunity to talk data and databases!

 

As always, here's a handful of links from the intertubz I thought you might find interesting. Enjoy!

 

Why Uber Won't Fire Its CEO

If people really want the CEO gone, they need to find a way to get the stock price low enough. Until people start losing money, the bad behavior will continue.

 

Google adds fact-check findings to search and news results

A step in the right direction, but it won't overcome the real issue: there is a dire lack of trusted fact-checkers in the world. As such, we have no idea what information can be trusted.

 

Hackers set off Dallas' 156 emergency sirens over a dozen times

I suspect we will see more of these stories over the next 18 months, as state and local government agencies aren't at the top of the "tech-savvy" list, mostly due to inadequate funding.

 

Microsoft Snatches Up Deis To Boost Azure Kubernetes Tech

Microsoft grabs Deis right after Docker announces plans to offer Enterprise containers. These items may be related.

 

SQL Server Privacy Statement

SQL Server became the first Microsoft product to publish detailed information about what data is collected for usage feedback and how. This is a huge step forward and sets the bar high, not only for other product teams, but for other database platforms in the industry.

 

Can My Internet Provider Really Sell My Data? How Can I Protect Myself?

Yes, they always have been able to do this. Of course, there are ways for you to minimize the risk here, but at the end of the day, you have to place your trust with someone, whether that be your ISP or with a company that provides a VPN.

 

It's that time of year: Easter egg season! Here's one my daughter made a few years back. I'm not sure it looks anything like me:

 


Thirty-three entered but only one could win.

If you made our winner mad, you may lose a limb.

When you’re in a galaxy far far away, it’s dangerous to fly solo—

be sure to travel with a Wookiee who knows how to use a crossbow.

And when the Death Star is about to go kablooey,

the one you want by your side is the one we affectionately call, Chewie.

 

Chewbacca won this bracket battle with one hairy arm tied behind his back.

Round after round he won in a landslide, and ultimately earned the title of The Greatest Sidekick of All Time!

 

We interviewed Chewbacca after we crowned him as the winner. He excitedly exclaimed, “Uuuuuuuuuur Ahhhhhrrrrrr Uhrrrr Ahhhhhhrrrrrrr Aaaaarhg!”

I think that really says it all folks.

 


 

 

What are your final thoughts on this year’s bracket battle?

 

Do you have any bracket theme ideas for next year?

 

Tell us below!

By Joe Kim, SolarWinds Chief Technology Officer

 

At our recent user group meeting in Washington, D.C., I had conversations with some of our Army customers, which reminded me of a blog written last year by our former Chief Information Officer, Joel Dolisy. It is exciting that our software can support their mission.

 

No one needs reliable connectivity more than the nation’s armed forces, especially during the heat of battle. But reliable connectivity often can be hampered by a hidden enemy: latency and bandwidth concerns.

 

The military heavily relies on Voice over Internet Protocol (VoIP) for calls, web conferencing, high-definition video sharing, and other bandwidth-heavy applications. While this might sound more like the communication tool for a business boardroom, it is equally applicable within the military, and compromised systems come with potentially life-altering consequences.

 

Use of these highly necessary communication tools can dramatically strain even the most robust communications systems. As such, Defense Department IT managers must maximize the efficacy of these tools while helping to ensure that colleagues remain in constant contact with minimal lag, stuttering, or disconnection.

 

How can IT administrators and those in the field make sure their networks supply crucial connectivity to meet the needs of soldiers and commanders? How can they guarantee reliable data and communications anytime, anywhere?

 

We can look to the U.S. Army, which successfully deployed software and solutions that meet today’s essential need to remain connected. The Army’s recruiting message is “Army Strong.” Attaining that strength requires troops be battle-ready the world over. Today, that also means maintaining uninterrupted communications and deploying technology that can monitor, analyze, and reduce—if not eliminate—network outages.

 

The Army developed Warfighter Information Network-Tactical (WIN-T) to provide secure and reliable communications of all forms—voice, video, and data—from any location, at any time. The network is built on software that helps the Army manage networks in a number of ways, yet creates seamless and smooth communications.

 

Network bandwidth analysis solutions help to ensure optimal network performance for even the most bandwidth-intense applications. Performance monitors identify potential performance issues that managers can rectify quickly before the problems interfere with communications or cause outages. Configuration management lets Army IT personnel easily manage multiple configurations for different routers and switches to help to ensure communication remains unimpeded. Traffic analysis maintains optimal network and bandwidth usage. The Army has also deployed solutions to monitor the quality of VoIP calls and WIN-T wide area network (WAN) performance, and troubleshoot issues through access to detailed call records.

 

A Communications Blueprint

 

In short, the organization is working hard to ensure that warfighters and others remain securely in touch at all times. After all, the heat of battle is when the need for communications is greatest, and helping to ensure that troops stay connected between the front lines and the command post is an absolute must.

 

Through WIN-T and supporting technologies, the Army has laid out a communications blueprint that other defense organizations can—and should—emulate. It has created a solution that enables soldiers to maintain constant voice, video, and data communications, even in remote and challenging locations and while on the move. It’s an impressive feat, and one essential to today’s always-on warfighter.

 

Find the full article on Signal.

[Photo: the author's dog and son]

On April 4, Seth Godin -- the writer I aspire to be like -- wrote "The Invisible Fence" (Seth's Blog: The invisible fence). In his usual eloquent yet terse style, he said:

"There are very few fences that can stop a determined person (or dog, for that matter).

Most of the time, the fence is merely a visual reminder that we're rewarded for complying.

If you care enough, ignore the fence. It's mostly in your head."

 

It caught my eye because once upon a time I looked into getting an Invisible Fence for my dog, pictured above. Also pictured above is my son, and to say the two were thick as thieves is an understatement. Aside from when he was at school, they went everywhere together. The boy thought the dog was his responsibility, at least that's what we'd told him. But our dog knew better. The boy was her human, a responsibility she took very seriously.

 

Which is why the Invisible Fence rep stood in my driveway, looked over at the dog and her human, and told me not to bother. "Dogs like that," he informed me, "guard their flock no matter what. If she hears him and decides she needs to be there, a 10-foot brick wall won't stop her, let alone a shock collar, no matter how high you turn it up. What it will do, though, is make her think twice about coming back."

 

Years later, with the dog laid to rest and her human almost grown, that comment has stuck with me.

 

How often, I'm left wondering, do we build fences?

Fences around our work.

Fences around our teams.

Fences around our interactions.

Fences around our relationships.

Fences around our heart.

 

Fences, which, as Seth writes, are mostly in our head.

 

And, like the salesman told me that day, fences that do nothing to keep others locked inside artificial boundaries, but do an amazing job of keeping them from coming back once they are free.

SolarWinds recently released the 2017 IT Trends Report: Portrait of a Hybrid IT Organization, which highlights the current trends in IT from the perspective of IT professionals. The full details of the report, as well as recommendations for hybrid IT success, can be found at it-trends.solarwinds.com.

 

The findings are based on a survey fielded in December 2016. It yielded responses from 205 IT practitioners, managers, and directors in the U.S. and Canada from public- and private-sector small, mid-size, and enterprise companies that leverage cloud-based services for at least some of their IT infrastructure. The results of the survey illustrate what a modern hybrid IT organization looks like, and show the cost benefits of the cloud, as well as the struggle to balance shifting job and skill dynamics.

 

The following are some key takeaways from the 2017 IT Trends Report, which found that organizations are:

  1. Moving more applications, storage, and databases into the cloud.
  2. Experiencing the cost efficiencies of the cloud.
  3. Building and expanding cloud roles and skill sets for IT professionals.
  4. Increasing complexity and lacking visibility across the entire hybrid IT infrastructure.

Cloud and hybrid IT are a reality for many organizations today, and they have created a new era of work that is more global, interconnected, and flexible than ever. At the same time, hybrid IT introduces greater complexity and technology abstraction. IT professionals are tasked with devising new and creative methods to monitor and manage these services, as well as with preparing their organizations and themselves for continued technology advancements.

 

Are these consistent with your organizational directives and environment? Share your thoughts in the comment section below.


 

For the first time in Bracket Battle 2017, I don’t think there were any surprises this round. The final four included two of the most popular bracket contestants we’ve ever seen—Groot & Chewbacca. They have continued to dominate each round and made getting into the finals look easy. Several of you predicted this final match-up on DAY 1 of the bracket battle! Team Watson & Team Pinky, you put up a good fight.

 

To see the final score for these match-ups, check out the polls below:

 

tomaddox

 

smoked_angus

 

For the last time this year, it’s time to check out the updated bracket and vote for the greatest sidekick of all time! This one is for all the marbles!

You will have until April 9th @ 11:59 PM CDT to submit your votes & campaign for your favorite sidekick.

 

Access the bracket and make your picks HERE>>
