
Geek Speak


If you’re not prepared for the future of networking, you’re already behind.

 

That may sound harsh, but it’s true. Given how quickly technology evolves compared to the rate at which most of us evolve our skillsets, there’s no time to waste in preparing ourselves to manage and monitor the networks of tomorrow. Yes, this is a daunting proposition considering that some of us are still trying to catch up with today’s essentials of network monitoring and management, but the two aren’t really mutually exclusive, are they?

 

In part one of this series, I outlined how the networks of today have evolved, and what today’s new essentials of network monitoring and management are as a consequence.

 

Before delving into what the next generation of network monitoring and management will look like, it’s important to first explore what the next generation of networking will look like.

 

On the Horizon

 

Above all else, one thing is for certain: We networking professionals should expect tomorrow’s technology to create more complex networks, resulting in even more complex problems to solve.

 

Networks growing in all directions

 

Regardless of your agency’s position, the explosion of IoT, BYOD, BYOA and BYO-everything is upon us. With this trend still in its infancy, the future of connected devices and applications will be defined not only by the quantity of connected devices, but also by the quality of their connections and the network bandwidth they consume.

 

Agencies are using, or at least planning to use, IoT devices, and this explosion of devices that consume or produce data will, not might, create a potentially disruptive explosion in bandwidth consumption, security concerns and monitoring and management requirements.

 

IPv6 eventually takes the stage…or sooner (as in now!)

 

Recently, ARIN was unable to fulfill a request for IPv4 addresses because the request was greater than the contiguous blocks available. IPv6 is a reality today. There is an inevitable and quickly approaching moment when switching over will no longer be an option, but a requirement.

 

SDN and NFV will become the mainstream

 

Software defined networking (SDN) and network function virtualization (NFV) are expected to become mainstream in the next five to seven years; okay, maybe a bit longer for our public sector friends. With SDN and virtualization creating new opportunities for hybrid infrastructure, a serious look at adoption of these technologies is becoming more and more important.

 

So long WAN Optimization, Hello ISPs

 

Bandwidth increases are outpacing CPU and custom hardware’s ability to perform deep inspection and optimization, and ISPs are helping to circumvent the cost and complexities associated with WAN accelerators. WAN optimization will only see the light of tomorrow in unique use cases where the rewards outweigh the risks.

 

Farewell L4 Firewalling

 

Firewalls incapable of performing deep packet analysis and understanding the nature of traffic at Layer 7 (L7), the application layer, will not satisfy the level of granularity and flexibility that most network administrators should offer their users. On this front, change is clearly inevitable for us network professionals, whether it means added network complexity and adapting to new infrastructures or simply letting withering technologies go.

 

Preparing to Manage the Networks of Tomorrow 

 

So, what can we do to prepare to monitor and manage the networks of tomorrow? Consider the following:

 

Understand the “who, what, why and where” of IoT, BYOD and BYOA

 

Connected devices cannot be ignored. According to 451 Research, mobile Internet of Things (IoT) and Machine-to-Machine (M2M) connections will increase to 908 million in just five years. This staggering statistic should prompt you to start creating a plan of action on how you will manage these devices.

 

Your strategy can either aim to manage these devices within the network or set an organizational policy to regulate their traffic altogether. Curbing all of tomorrow’s BYOD/BYOA is nearly impossible. As such, you will have to understand your network device traffic at a granular level in order to optimize and secure it. Even more, you will need visibility into devices that aren’t in your direct control, like tablets, phablets and Fitbits, to properly isolate issues.

 

Know the ins and outs of the new mainstream

 

As stated earlier, SDN, NFV and IPv6 will become the new mainstream. We can start preparing for these technologies’ future takeovers by taking a hybrid approach to our infrastructures today. This will put us ahead of the game.

 

Start comparison shopping now

 

Evaluating virtualized network options and other on-the-horizon technologies will help you nail down your agency’s particular requirements. Sometimes, knowing that a vendor offers or works with technology you don’t need right now but might need later can and should influence your decisions.

 

Brick in, brick out

 

Taking on new technologies can feel overwhelming. Look for ways that potential new additions will not just enhance, but replace the old guard. If you don’t do this, then the new technology will indeed simply seem to increase workload and do little else. This is also a great measuring stick to identify new technologies whose time may not yet have truly come for your organization.

 

To conclude this series, my opening statement from part one merits repeating: learn from the past, live in the present and prepare for the future. The evolution of networking waits for no one. Don’t be left behind.

 

Find the full article on Federal Technology Insider.

September has now become a series of IT industry events. From VMware VMworld to SolarWinds THWACKcamp to Oracle OpenWorld to Microsoft Ignite, it seems like an endless procession of speaking sessions, in-booth demo presentations, and conversations with IT professionals in those communities. That last aspect is my favorite part of industry events. The work we do needs to have meaning, and interacting with people is my fuel for that meaningful fire. Technologies, people, and processes will always change. Similarly, the desire to learn, evolve, and move forward remains the constant for successful integration into any new paradigm. Be constant in your evolution.

 

Here's a brief recap of occurrences at recent events that I had the great privilege of attending and participating in:

 

VMworld 2016

[Photos: SolarWinds booth staff at VMworld 2016; sqlrockstar and kong.yang at the VMware vExpert party at the Mob Museum; chrispaap before he rocked the booth with his Scaling Out Your Virtual Infrastructure session]

 

 

THWACKcamp 2016

[Photos: the Head Geeks with their executive leader jennebarbour; Radioteacher, kong.yang, and DanielleH photobombed by KMSigma; sqlrockstar - it's make-up time #ChallengeAccepted w/ hcavender. Peace out, brother :-)]


Coming soon to an IT event near you: IT Pro Day, Microsoft Ignite in HAWT-lanta, Chicago SWUG, and AWS re:Invent in Las Vegas. Stay thirsty for IT knowledge and truths my friends! Let me know if you'll be at any of these events, always happy to connect with THWACK community members and converse the IT day away.


We just spent two days wrestling with this year’s THWACKcamp theme, and I think we’ve all come away much richer for the discussions held, the information shared, and the knowledge imparted.

 

And as I sit here in the airport lounge, tired but exhilarated and energized, an old post appeared on my FB feed. It was written and posted by a friend of mine a few years ago, but came back up through serendipity, today of all days.

 

It tells the story of why this amazing woman who's been my friend since 7th grade chose the sciences as her life's path. And it starts with a ring - one which was given 50 years late, but given nevertheless. And why the ring was actually secondary after all.

 

We lived near each other, played flute one chair apart in band, and shared an interest in all things geeky, including comic books and D&D. When she got her license in high school we carpooled to school together, because we both suffered from needing to be “in place” FAR too early in the morning. Many a sleepy morning was spent sitting outside the band room, where I would test her on whatever chemistry quiz was upcoming. Even then her aspiration to be a biologist was clear and firm, and she was driven to get every answer not only correct, but DOWN. Down pat. To this day whenever I call, I ask her the atomic weight of germanium (72.64, if you are curious).

 

She graduated a year before me, was accepted to the college of her choice, and from there easily attained all of her goals.

 

No small credit for this goes to the two women in her life: “Nanna” – her grandmother, who you can read about below; and her mother, a gifted chemical engineer with a long and illustrious career at BP, who was herself also inspired by Nanna.

 

When we talk about the energy behind the “challenge accepted” theme, this story really drives home a powerful set of lessons:

 

  1. The lesson that the challenges accepted by others have paved the way for us. They are a very real and tangible gift.
  2. The reminder that our willingness to face challenges today has the potential to impact far more than we realize: more than our day; more than our yearly bonus; more even than our career.

 

In simply getting up, facing the day, and proclaiming (whether in a bold roar to the heavens or a determined whisper to ourselves) “Challenge Accepted” we have the opportunity to light the way for generations to come.

 

************************

Nanna's Ring

It is a simple steel band. No engravings, nothing remarkable. It has always been on her right hand pinkie finger since she got it.

 

It was May in the summer of 1988. I had graduated with my bachelor's degree in biology and was getting ready to start the Master's program at the University of Dayton. Nanna needed to get to Ada, Ohio and I needed to drop some stuff down at Dayton. We made a girls' weekend trip around Ohio.

 

We talked about traveling to college and how in her day it was all back roads; the interstate system that I was driving had not come about. I was speeding (young and in a hurry to go do things) and Nanna said, "Go faster." I made it from Eastside Cleveland to Dayton in record time. Dropped off the stuff with my friends and back on the road!

 

Made it to Ohio Northern in time to grab dinner in the dorm hall cafeteria, wander around a little bit until Nanna's knees had enough of that, then we settled into the dorm room for the night. I got the top bunk, she took the bottom. We talked and giggled like freshmen girls spending their first night at college.

 

The next morning we got dressed up, grabbed breakfast, then made our way to the lecture hall. The room was packed with kids my age and professors. The ceremony began. It was the honor society for engineers and the soon-to-graduate engineers were being honored. One by one, the new engineers were called to the front. Last of all the head of the society called out, "Jane Cedarquist!" Nanna smiled and, with a little more spring in her achy knees, went to the front of the hall.

 

"About 50 years ago, this young lady graduated with a degree in engineering. She was the first lady to do so from our college so we honor her today - an honor overdue." She got a standing ovation and a number of the young engineers that stood with her gave her hugs and shook her hands. After the quiet returned, the engineering students gave their pledge and received their rings of steel and placed them on their right hand pinkie fingers.

 

Jane Cedarquist went out into the world as an engineer and managed to survive the trials and tribulations of being a woman in a man's world. Eventually she met Dick Harris and they married and had two kids. She stayed home because that's how things worked. Eventually her kids grew up and had kids of their own. She never got back to engineering, though some of the landscape projects and quilts she made had the obvious stamp of an engineer's handiwork. She traveled around the world and marveled at the wonders, both man-made and God-given. She loved jewelry and always had her earrings, necklace, bracelets, and rings on her person. All of them had meaning and value. Some pieces would come and go, but after that day in 1988 she was never seen without that ring of steel.

 

It is a simple steel band. No engravings, nothing remarkable. It has always been on her right hand pinkie finger since she got it - until now.

 

(Leon's footnote: Jane Cedarquist Harris passed away, and passed her ring to my friend. She wears it - on a chain, since she understands the gravity of the pledge her Nanna made - carrying the legacy forward both professionally and symbolically).


In my previous posts, I shared my tips on being an Accidental DBA - what things you should focus on first and how to prioritize your tasks. Today at 1 PM CDT, Thomas LaRock, Head Geek, and Kevin Sparenberg, Product Manager, will be talking about what Accidental DBAs should know about all the stuff that goes on inside the Black Box of a database. I'm going to share with you some of the other things that Accidental DBAs need to think about inside the tables and columns of a database.

 

I'm sure you're thinking "But Karen, why should I care about database design if my job is keeping databases up and running?"  Accidental DBAs need to worry about database design because bad design has significant impacts on database performance, data quality, and availability. Even though an operational DBA didn't build it, they get the 3 AM alert for it.

 

Tricks

People use tricks for all kinds of reasons: they don't fully understand the relational model or databases, they haven't been properly trained, they don't know a feature already exists, or they think they are smarter than the people who build database engines. All but the last one are easily fixed. Tricky things are support nightmares, especially at 3 AM, because all your normal troubleshooting techniques are going to fail. They impact the ability to integrate with other databases, and they are often so fragile no one wants to touch the design or the code that made all these tricks work. In my experience, my 3 AM brain doesn't want to see any tricks.

 


 

Tricky Things

Over my career I've been amazed by the variety and volume of tricky things I've seen done in database designs.  Here I'm going to list just 3 examples, but if you've seen others, I'd love to hear about them in the comments. Some days I think we need to create a Ted Codd Award for the worst database design tricks.  But that's another post...

 

Building a Database Engine Inside Your Database

 

You've seen these wonders…a graph database built in a single table. A key-value pair (or entity attribute value) database in a couple of tables. Or my favourite, a relational database engine within a relational database engine. Now, doing these sorts of things for specific reasons might be a good idea. But embracing these designs as your whole database design is a real problem. More about that below.
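To make the anti-pattern concrete, here is a minimal sketch (my own, in Python with sqlite3) of the entity-attribute-value shape described above; the table and attribute names are invented for illustration:

    import sqlite3

    con = sqlite3.connect(":memory:")

    # The "database engine inside your database" anti-pattern:
    # every fact about every entity lands in one generic table.
    con.execute("""
        CREATE TABLE eav (
            entity_id INTEGER,
            attribute TEXT,
            value     TEXT   -- everything is text; real types are gone
        )
    """)
    con.executemany(
        "INSERT INTO eav VALUES (?, ?, ?)",
        [(1, "name", "Widget"), (1, "price", "9.99"), (2, "name", "Gadget")],
    )

    # Even a trivial question needs a self-join per attribute, and the
    # optimizer sees only one table and one index shape to work with.
    rows = con.execute("""
        SELECT n.value AS name, p.value AS price
        FROM eav n
        JOIN eav p ON p.entity_id = n.entity_id
        WHERE n.attribute = 'name' AND p.attribute = 'price'
    """).fetchall()
    print(rows)   # [('Widget', '9.99')]

Two attributes cost one self-join; twenty attributes cost nineteen, which is exactly why these designs fall over.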

 

Wrong Data Types

 

One of the goals of physical database design is to allocate just the right amount of space for data. Too little and you lose data (or customers); too much and performance suffers. But some designers take this too far and reach for the smallest type possible, like INTEGER for a ZIP code. Ignoring that some postal codes have letters, this is a bad idea because ZIP codes have leading zeros. When you store 01234 as an INTEGER, you are storing 1234. That means you need to do text manipulation to find data via postal code, and you need to "fix" the data to display it.
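A quick sketch of what that loss looks like, with plain Python standing in for the database's type system:

    # ZIP code stored as INTEGER: the leading zero is data, and an
    # integer type silently throws it away.
    zip_as_text = "01234"
    zip_as_int = int(zip_as_text)      # the stored value is now 1234

    print(zip_as_int)                  # 1234 -- the zero is gone
    # Displaying it requires "fixing" the data on the way out...
    print(str(zip_as_int).zfill(5))    # "01234", via string surgery
    # ...and every lookup against a text ZIP now needs a cast,
    # which defeats any index on the column.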

 

Making Your Application Do the Hard Parts

It's common to see solutions architected to do all the data integrity and consistency checks in the application code instead of in the database. Referential integrity (foreign key constraints), check constraints, and other database features are ignored, and instead hundreds of thousands of lines of code are written to provide these data quality features. This inevitably leads to data quality problems. However, the worst thing is that these often lead to performance issues, too, and most developers have no idea why.
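For contrast, here is a minimal sketch of letting the engine do the hard parts, again in Python with sqlite3; the schema is invented for illustration:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("PRAGMA foreign_keys = ON")   # SQLite's opt-in for FK checks

    # The rules are declared once, in the schema, instead of being
    # re-implemented in every application code path.
    con.executescript("""
        CREATE TABLE customer (
            id   INTEGER PRIMARY KEY,
            name TEXT NOT NULL
        );
        CREATE TABLE invoice (
            id          INTEGER PRIMARY KEY,
            customer_id INTEGER NOT NULL REFERENCES customer(id),
            amount      NUMERIC NOT NULL CHECK (amount > 0)
        );
    """)
    con.execute("INSERT INTO customer (id, name) VALUES (1, 'Acme')")

    # Both of these fail inside the engine, at every entry point,
    # every time -- no developer can forget the check.
    for bad in ("INSERT INTO invoice (customer_id, amount) VALUES (99, 10)",
                "INSERT INTO invoice (customer_id, amount) VALUES (1, -5)"):
        try:
            con.execute(bad)
        except sqlite3.IntegrityError as e:
            print("rejected:", e)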

Why Do We Care?

 

While most of the sample tricks above are the responsibility of the database designer, the Accidental DBA should care because:

 

  • DBAs are on-call, not the designers
  • If there are Accidental DBAs, it's likely there are Accidental Database Designers
  • While recovery is job number one, all the other jobs involve actually getting the right data to business users
  • Making bad data move around faster isn't actually helping the business
  • Making bad data move around slower never helps the business
  • Keeping your bosses out of jail is still in your job description, even if they didn't write it down

 

But the most important reason why production DBAs should care about this is that relational database engines are optimized to work a specific way - with relational database structures.  When you build that fancy Key-Value structure for all your data, the database optimizer is clueless how to handle all the different types of data. All your query tuning tricks won't help, because all the queries will be the same.  All your data values will have to be indexed in the same index, for the most part.  Your table sizes will be enormous and full table scans will be very common.  This means you, as the DBA, will be getting a lot of 3 AM calls. I hope you are ready.

 

With applications trying to do data integrity checks, they are going to miss some. A database engine is optimized to do integrity checks quickly and completely. Your developers may not.  This means the data is going to be mangled, with end users losing confidence in the systems. The system may even harm customers or lead to conflicting financial results.  Downstream systems won't be able to accept bad data.  You will be getting a lot of 3 AM phone calls as integration fails.

 

Incorrect data types will lead to running out of space for bigger values, slower performance as text manipulation must happen to process the data, and less confidence in data quality.  You will be getting a lot of 3 AM and 3 PM phone calls from self-serve end users.

 

In other words, doing tricky things with your database is tricky. And often makes things much worse than you anticipate.

 

At THWACKcamp today, sqlrockstar Thomas and Kevin will be covering the mechanics of databases and how to think about troubleshooting all those 3 AM alerts. While you are attending, I'd like you to also think about how design issues might have contributed to that phone call. Database design and database configurations are both important. A great DBA, accidental or not, understands how all these choices impact performance and data integrity.

 

Some tricks are proper responses to unique design needs. But when I see many of them, or an overuse of tricks, I know that there will be lots and lots of alerts happening in some poor DBA's future. You should take steps to ensure a good design lets you get more sleep. Let the database engine do what it is meant to do.

TODAY IS THWACKCAMP! Have you registered yet? I will be in Austin this week for the event and doing some live cut-ins as well. Come join over 5,000 IT professionals for two days of quality content and prizes!

 

As exciting as THWACKcamp might be, I didn't let it distract me from putting together this week's Actuator. Enjoy!

 

Ransomware: The race you don’t want to lose

Another post about ransomware, which means it's time for me to take more backups of all my data. You should do the same.

 

How to Stream Every NFL Game Live, Without Cable

I had a friend once who wanted to watch a football match while he was in Germany, and he used an Azure VM in a US East datacenter to access the stream. Funny how those of us in tech know how to get around silly rules blocking content in other countries, like Netflix in Canada, but it still seems to be a big secret for most.

 

Why lawyers will love the iPhone 7 and new Apple Watch

Everything you need to know about the recent Apple event last week. Even my kids are tuned in to how Apple likes to find ways to get people to spend $159 on something like an AirPod that will easily get lost or damaged, forcing you to spend more money.

 

Delta: Data Center Outage Cost Us $150M

Glad we have someone trying to put a price tag on this but the question that remains is: How much would it have cost Delta to architect these systems in a way that the power failure didn't need to trigger a reboot? If the answer is "not as much", then Delta needs to get to work, because these upgrades will be incremental and take time.

 

Samsung Galaxy Note 7: FAA warns plane passengers not to use the phone

Since we are talking about airlines, let's talk about how the next time you fly your phone (or the phone of the passenger next to you) may explode. Can't the FAA and TSA find a way to prevent these phones from being allowed on board?

 

Discipline: The Key to Going From Scripter to Developer

Wonderful write up describing the transition we all have as sys admins. We go from scripting to application development as our careers progress. In my case, I was spending more time managing my scripts and homegrown monitoring/tracking system than I was being a DBA. That's when I started buying tools instead of building them.

 

What if Star Trek’s crew members worked in an IT department?

Because Star Trek turned 50 years old last week, I felt the need to share at least one post celebrating the series that has influenced so many people for so many years.

 

Presented without comment:

DBA.jpg

Many of my blog posts and live talks focus on the changing nature of storage. Traditional storage architectures are giving way to dispersed arrays and even software-defined storage. And traditional storage arrays are giving way to things that don't really look like storage arrays at all. But what does this mean for the storage administrator? Is this job going the way of the dodo?

 

Storage isn't easy. Sure, it seems like it's just a matter of keeping the disks spinning while applications do the real work, but storage is much more than that. It's all about performance, availability, and advanced data movement features. And the job of the storage administrator is much more than just keeping the lights on until the next array needs to be installed! In fact, it is mastery of the application integration points, from VAAI to TimeFinder, that truly define what it means to be a storage admin.

 

As storage arrays devolve, evolve, and merge with servers, many of the traditional management tasks do disappear. But so much more is left to do! The demise of the traditional storage array isn't the end of a career in storage; it's a moment of liberation!

 

Storage administrators have always wanted to move their focus from hardware operations to higher-level data management tasks. Now is their big chance. VSAN may have no SAN at all, but the data is still there. Cloud storage moves data off-premises but the data is just as important. In many ways, these new technologies make it even more important for a company to have someone looking after their data.

 

Now is the time for that storage administrator to stake out a seat at the application planning meetings and begin talking about issues of data mobility, locality, and availability. These are the traditional topics of the storage industry, but they were too often submerged in the daily grind of storage performance and basic functionality. Integrated storage systems finally promise to eliminate the tedium of mapping and connecting storage systems, and anyone who's fought with iSCSI or Fibre Channel welcomes that burden being lifted!

 

The career of a storage administrator is not going away. In fact, it's becoming much more important in this software-defined world. Rise up and become data managers!

 

I am Stephen Foskett and I love storage. You can find more writing like this at blog.fosketts.net, connect with me as @SFoskett on Twitter, and check out my Tech Field Day events.

Network management doesn’t have to be overly complex, but a clear understanding of what needs to be accomplished is important. In a previous blog series I talked about the need for a tools team to help in this process; a cross-functional team may be critical in defining these criteria.

 

  1. Determine What is Important—What is most important to your organization is likely different from what matters to your peers at other organizations, albeit somewhat similar in certain regards. Monitoring everything isn’t realistic and may not even be valuable if nothing is done with the data that is being collected. Zero in on the key metrics that define success and determine how to best monitor those.
  2. Break it Down into Manageable Pieces—Once you’ve determined what is important to the business, break that down into more manageable portions. For example, if blazing-fast website performance is needed for an eCommerce site, consider dividing this into network, server, services, and application monitoring components.
  3. Maintain an Open System—There is nothing worse than being locked into a solution that is inflexible. Leveraging APIs that can tie disparate systems together is critical in today’s IT environments (see the sketch after this list). Strive for a single source of truth for each of your components and exchange that information via vendor integrations or APIs to make the system better as a whole.
  4. Invest in Understanding the Reporting—Make the tools work for you; a dashboard is simply not enough. Most of the enterprise tools out there today offer robust reporting capabilities; however, these often go unused.
  5. Review, Revise, Repeat—Monitoring is rarely a “set and forget” item; it should be in a constant state of improvement, integration, and evaluation to enable better visibility into the environment and the ability to deliver on key business values.
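To make point 3 concrete, here is a rough sketch of API glue between two systems, in Python. The endpoints, field names, and auth scheme are all hypothetical; substitute the documented APIs of your own tools:

    import requests

    MONITOR_API = "https://monitoring.example.com/api/v1"   # hypothetical
    CMDB_API = "https://cmdb.example.com/api/v1"            # hypothetical

    def sync_device_status(session):
        # Single source of truth: the monitoring system owns up/down...
        devices = session.get(f"{MONITOR_API}/devices", timeout=10).json()
        for device in devices:
            # ...and the other systems consume it rather than re-polling.
            session.patch(
                f"{CMDB_API}/assets/{device['id']}",
                json={"status": device["status"]},
                timeout=10,
            ).raise_for_status()

    if __name__ == "__main__":
        with requests.Session() as s:
            s.headers["Authorization"] = "Bearer <token>"   # placeholder
            sync_device_status(s)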

Learn from the past, live in the present and prepare for the future.

 

While this may sound like it belongs hanging on a high school guidance counselor’s wall, they are, nonetheless, words to live by, especially in federal IT. And they apply perhaps to no other infrastructure element better than the network. After all, the network has long been a foundational building block of IT, and its importance will only continue to grow in the future.

 

It’s valuable to take a step back and examine the evolution of the network. Doing so helps us take an inventory of lessons learned—or the lessons we should have learned; determine what today’s essentials of monitoring and managing networks are; and finally, turn an eye to the future to begin preparing now for what’s on the horizon.

 

Learn from the Past

 

Before the luxuries of Wi-Fi and the proliferation of virtualization, the network was a mostly wired, physical entity controlled by routers and switches. Business connections were established and backhauled through the data center. Each network device was a piece of agency-owned hardware, and applications operated on well-defined ports and protocols.

 

With this yesteryear in mind, consider the following lessons we all (should) have learned that still apply today:

 

It Has to Work

 

If your network doesn’t actually work, then all the fancy hardware is for naught. Anything that impacts the ability of your network to work should be suspect.

 

The Shortest Distance Between Two Points is Still a Straight Line

 

Your job as a network engineer is still fundamentally to create the conditions where the distance between the provider of information, usually a server, and the consumer of that information, usually a PC, is as near to a straight line as possible. If you get caught up in quality of service maps, and disaster recovery and continuity of operations plans, you’ve lost your way.

 

Understand the Wizard

 

Wizards are a fantastic convenience and come in all forms, but if you don’t know what the wizard is making convenient, you are heading for trouble.

 

What is Not Explicitly Permitted is Forbidden

 

This policy will create work for you on an ongoing basis, but there is honestly no other way to run your network. If you believe this policy will get you in trouble, then the truth is you’re going to get into trouble anyway. Do your part to make your agency network more secure, knowing that the bad guys are out there, or the next huge security breach might be on you.

 

Live in the Present

 

Now let’s fast forward and consider the network of present day.

 

Wireless is becoming ubiquitous, and the number of devices wirelessly connecting to the network is exploding. It doesn’t end there, though—networks are growing, some devices are virtualized, agency connections are T1 or similar services, and there is an increased use of cloud services. Additionally, tablets and smartphones are becoming prevalent and creating bandwidth capacity and security issues; application visibility based on port and protocol is largely impossible due to tunneling, and VoIP is common.

 

The complexity of today’s networking environment underscores that while lessons of the past are still important, a new set of network monitoring and management essentials is necessary to meet the challenges of today’s network administration head on. These new essentials include:

 

Network Mapping

 

When you consider the complexity of today’s networks and network traffic, network mapping and the subsequent understanding of management and monitoring needs have never been more essential than they are today.

 

Wireless Management

 

The growth of wireless networks presents new problems, such as ensuring adequate signal strength and keeping the proliferation of devices and their physical mobility from getting out of hand. What’s needed are tools such as wireless heat maps, user device tracking, and device IP address management.

 

Application Firewalls

 

Application firewalls can untangle device conversations, get IP address management under control and help prepare for IPv6. They can also classify and segment device traffic; implement effective quality of service to ensure that critical business traffic has headroom; and of course, monitor flow.

 

Capacity Planning

 

You need to integrate capacity forecasting tools, configuration management, and web-based reporting to be able to predict scale and demand requirements.
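As a sketch of what even the simplest forecasting looks like, here is a least-squares trend line over invented monthly usage numbers in Python; real capacity tools do far more, but the idea is the same:

    # Fit a trend line to monthly bandwidth usage and project when it
    # crosses the circuit capacity. All numbers here are made up.
    usage_gb = [310, 335, 362, 390, 421, 455]   # last six months
    months = range(len(usage_gb))

    # Ordinary least-squares slope and intercept, no libraries needed.
    n = len(usage_gb)
    mean_x = sum(months) / n
    mean_y = sum(usage_gb) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(months, usage_gb))
             / sum((x - mean_x) ** 2 for x in months))
    intercept = mean_y - slope * mean_x

    capacity_gb = 600
    months_until_full = (capacity_gb - intercept) / slope
    print(f"~{slope:.0f} GB/month growth; capacity reached around "
          f"month {months_until_full:.1f}")   # ~29 GB/month, month ~10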

 

Application Performance Insight

 

The whole point of having a network is to run the applications stakeholders need to do their jobs. Face it, applications are king. Technologies such as deep packet inspection, or packet-level analysis, can help you ensure the network is not the source of application performance problems.

 

Prepare for the Future

 

Now that we’ve covered the evolution of the network from past to present—and identified lessons we can learn from the network of yesterday and what the new essentials of monitoring and managing today’s network are—we can prepare for the future. So, stay tuned for part two in this series to explore what the future holds for the evolution of the network.

 

Find the full article on Federal Technology Insider.

Last week VMworld took place in Las Vegas, and I was fortunate enough to attend for the second straight year. I love the energy at VMworld; it is unlike any other conference I attend. The technology, as well as the attendees, appear to be on the cutting edge of enterprise technology. The discussions we have in and around the exhibit hall are worth the price of admission alone. On top of that, I am lucky enough to be able to rub elbows with top industry experts and have discussions about the future tech landscape.

 

During VMworld there was a hashtag on Twitter, #VMworld3word, where attendees would use three words to describe VMworld. That got me thinking about how I might try to describe VMworld in three takeaways instead of just words. These three items were common themes in the discussions I took part in, or witnessed, last week.

 

You can find lots of articles on the internet that summarize all the major announcements at VMworld. That's not what this blog post is for. No, this blog post is my effort to help you understand what I witnessed as common threads, even in regard to the major announcements.

 

Storage is King, Maybe

Make no mistake about this, everywhere you looked you found someone talking about storage, storage issues, and storage solutions. Flash is the answer for everything, apparently, even if storage isn't your issue. At one point I swear a storage vendor promised me that their all-flash hyper-converged array would cure my polio. The amount of money being invested in storage vendors may be trending downward, but judging by the exhibit hall floor last week at VMWorld the amount of money being spent on storage product development and marketing remains high.

 

One aspect that these storage vendors seem to either be forgetting, or just not talking about, is the Hybrid IT story. It would seem that the Cloud is a bit of a threat to these vendors, as they are finding it harder to sell their wares to enterprise customers and instead must start focusing on building partnerships with Microsoft and Amazon if their products are to remain relevant. Unfortunately, those Cloud giants rely on commodity hardware, not specialty hardware, which means to me that the storage gravy train is just about over. Let's face it, Microsoft isn't about to order a million hyper-converged arrays anytime soon.

 

The last point I want to make about storage is that there seems to be a mindset that storage is the main bottleneck. Many vendors seem to forget that the network plays an important role in getting data to and from their storage devices. The network seems to be an area where storage vendors just put their hands up and say "that's not us". This is especially true when we talk Hybrid IT as well.

 

Correlated Monitoring is Lacking

Whenever I had the chance to talk monitoring with vendors and attendees, it was clear to me that correlation of metrics and events is something that is lacking in the industry. I believe this to be true for two reasons. First, everyone makes dashboards that show metrics, usually related by resource (disk, CPU, memory, and sometimes basic network stats), and everyone admits that such dashboards are not very good at telling you a root cause. Second, the look on the faces of data professionals when I show them the main virtualization screen for DPA. Once they see that stacked view that allows them to see issues at the storage, host, guest, or database engine layer, their initial reaction is "take my money". It is as if no one on the market is presenting such data in a correlated way.

 

That's because vendors have spent years building tools that report metrics but do not report meaning. The latest trend is machine learning and predictive analytics to provide insight, but the reality is you don't need a lot of fancy algorithms behind the scenes to do 80% of your work. What you need is for someone to provide you a group of metrics, across your infrastructure, that shows the relationships between entities. In other words, if there is an issue with a LUN, can you quickly see what datastores, hosts, VMs, databases, and applications might be affected? For many vendors the answer is "no".
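A toy sketch of that correlated view, in Python: a dependency map walked breadth-first, so an alert on one layer immediately lists everything stacked on top of it. The entity names are invented:

    # Who sits on top of whom. A real tool would discover this map;
    # here it is hard-coded for illustration.
    DEPENDS_ON = {
        "lun-07": ["datastore-2"],
        "datastore-2": ["vm-web-01", "vm-sql-01"],
        "vm-sql-01": ["db-orders", "db-billing"],
    }

    def blast_radius(entity):
        """Everything reachable from a troubled entity, breadth-first."""
        affected, queue = [], [entity]
        while queue:
            node = queue.pop(0)
            for child in DEPENDS_ON.get(node, []):
                affected.append(child)
                queue.append(child)
        return affected

    print(blast_radius("lun-07"))
    # ['datastore-2', 'vm-web-01', 'vm-sql-01', 'db-orders', 'db-billing']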

 

Accidental Cloud DBAs

Given my background and role, I naturally gravitated toward conversations with DBAs last week. Storage and correlated monitoring were two of the topics we talked about. The third topic centered on how traditional DBAs today have little to no insight into (or knowledge of) how networks work. But with Hybrid IT now the reality for most, the data professionals I spoke with last week acknowledged that they needed to know more about networks, network topology, and how to troubleshoot network performance quickly.

 

Think about this for a moment. When your company starts using cloud resources, and someone calls your desk saying "the app is slow," are you able to quickly determine if the issue is related to the network? I tend to do my performance troubleshooting in buckets. It goes like this: either something is in this bucket, or in that bucket. If the app is slow, then I want to quickly determine whether it is a network issue or not. If it is, then I work with the team(s) that can fix the issue. If it is not, then I know it is likely something I need to fix as the DBA, and I get to work.
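Here is a crude sketch of that first bucket check in Python: time the TCP connect (network) separately from everything that happens after it. The host, port, and threshold are placeholders for your own environment:

    import socket
    import time

    def connect_time(host, port):
        """Seconds to complete a TCP handshake with the server."""
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        return time.perf_counter() - start

    rtt = connect_time("db.example.com", 1433)   # placeholder host/port
    if rtt > 0.2:   # slow handshake -> probably the network bucket
        print(f"connect took {rtt:.3f}s: hand this to the network team")
    else:           # fast handshake -> look inside the database bucket
        print(f"connect took {rtt:.3f}s: time to look at query plans")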

 

But I certainly don't want to lose hours of my life trying to fix an application issue that doesn't exist. The Cloud DBA will also serve as an accidental network administrator as more companies adopt a Hybrid IT strategy.

 

There you have it, my three takeaways from a fabulous week in Las Vegas. Oh, and here's my #VMWorld3word for you: Fall out boy.

 



The Software Defined Era

Posted by arjantim Sep 12, 2016

In the last couple of years, the business has constantly been asking IT departments how the public cloud can provide services that are faster, cheaper, and more flexible than in-house solutions. I’m not going to argue with you about whether this is right or not; it is what I hear at most of my customers, and in a couple of cases the answer seems to be automation. The next-gen data centers that leverage a software-defined foundation use high levels of automation to control and coordinate the environment, enabling service delivery that will meet business requirements today and tomorrow.

 

For me, the software-defined data center (SDDC) provides an infrastructure foundation that is highly automated for delivering IT resources at the moment they are needed. The strength of the SDDC is the idea of abstracting the hardware and enabling its functionality in software. Due to the power of hardware these days, it’s possible to use a generic platform with specialized software that enables the core functionality, whether for a network switch or a storage controller. Network functions, for example, were once specialized hardware appliances; today, they are more and more often virtualized with specialized software. Virtualization has revolutionized computing and allowed flexibility and speed of deployment. In today’s IT infrastructures, virtualization enables both portability of entire virtual servers to off-premises data centers for disaster recovery and local virtual server replication for high availability. What used to require specialized hardware and complex cluster configuration can now be handled through a simple check box.

 

By applying the principles behind virtualization of compute to other areas such as storage, networking, firewalls and security, we can use its benefits throughout the data center. And it’s not just virtual servers: entire network configurations can be transported to distant public or private data centers to provide full architecture replication. Storage can be provisioned automatically in seconds and perfectly matched to the application that needs it. Firewalls are now part of the individual workload architecture, not of the entire data center, and this granularity protects against threats inside and out, yielding unprecedented security. But what does it all mean? Is the SDDC some high-tech fad or invention? If you ask me: absolutely not. The SDDC is the inevitable result of the evolution of the data center over the last decade.

 

I know there is a lot of marketing fluff around the data center, and Software Defined is one example, but for me the SDDC is the perfect fit for a lot of companies at this time. Who knows what the future will bring or where we’ll stand in 10 years! The only thing we know is that a lot of companies are struggling with their IT infrastructure and need help bringing the environment to the next level. The SDDC is a big step forward for most (if not all) of us, and call it what you like, but I’ll stick to SDDC.

It's time to go to camp! You're probably thinking to yourself, "But Tom, summer's over. Camp is through. I have to go back to my boring day job."

That's not true at all! You've still got one more chance to go to camp with a group of geeks that you'll fit in with just fine. SolarWinds THWACKcamp is next week, and it's virtual! You can grab some s'mores, sit around the warm glow of your monitor, and commiserate with great speakers like Amy Lewis, Patrick Hubbard, and Stephen Foskett!

This is a chance for you to learn more about hot topics in IT. Not just the typical discussions about how SDN is going to take your job or how the cloud is the most awesome thing ever. No, these are really in-depth discussions about topics that matter to you. Like about how you shouldn't hate your monitoring system. Or how you can get started with the fundamentals of network security. There's even a flash storage panel! These are the topics that matter to the people in the trenches fighting to keep the network alive and running each day. And even if you find yourself wearing the DBA hat now and then, THWACKCamp has you covered there too.

The state of IT in 2016 is in flux more than any other time in the history of computing. Software is eating the world and proving that expensive custom hardware doesn't rule the kingdom any longer. The cloud is forcing infrastructure teams to look at their budgets and make hard choices. The cloud is also forcing application developers to change the way they write apps and never assume that something isn't going to be running all over the world at all hours of the day and night. The push toward automation and orchestration of the data center means that IT pros need to be making smarter decisions about the way things are planned so they aren't spending hours and hours fixing failures in production.

What does this state of flux mean for you? It means you need to get a leg up on everything you need to know to meet these challenges head on. That could mean sleepless nights combing through documentation for little gain. Or it could mean flying across the country to hear some "expert" drone on in an uncomfortable convention center about what their company vision is to get you to buy more things.

Instead, why don't you spend $0 and sit in the comfort of your office chair or couch and participate in THWACKcamp? You can learn about what you need to know without the need to travel or spend a fortune for no gain. Not enough? How about getting more than you bargained for? Because every THWACKcamp session has a drawing for a free, fully licensed SolarWinds tool! You can't beat a deal like that!

Stop worrying about the future of IT and do something to be a part of it! Sign up for THWACKCamp today. It costs nothing and provides more than you could have bargained for. Good discussions, important topics, and a chance to get some free tools. You have nothing to lose, so sign up now!

I’ve been in a customer-facing role for the last seven years. My first role as a Pre-Sales Architect came after years of running internal architecture for a large insurance company.

 

There were many adjustments I needed to make in order to be successful in this new role. Some came easily to me, most notably the empathy required to support an internal IT role; I’d been there for years, so my abilities in this respect came naturally. Others didn’t come quite so smoothly. Of these, the most difficult for me had to be the push to convince the buyer that my solution was the most ideal for their needs. In some cases, surely, I did have the ideal solution, but in others, a bit of shoehorning needed to take place. I had some philosophical problems pushing a less-than-ideal solution to my customers.

 

Taking care of the needs of my customers had always been the first priority toward which I focused my efforts. Arriving at the best solution, regardless of vendor, is and was paramount.

 

At what point, though, do the specific needs a customer presents outweigh the benefits of the relationship?

 

In some ways, a simple cost-benefit analysis is all it takes. But that may be applying too simple a formula to the complexity of a decision like this. A customer who’s not appropriate for your customer base, who demands too much time, attention, or effort, with not enough pay-off to the company for whom you work, could expedite the simple formula. But that’s easily too obvious an answer.

 

We could go on and on about a customer who’s unwilling or unable to pay their bills. This, again, is a clear decision. In these cases, it’s appropriate to state, “We’ll help, but we must change the pay dynamic.” How about making things Net 10, or payable upon service? We must be considerate of their unique set of circumstances. However, this is a business, and to cut off a customer because they’ve stopped being mutually beneficial is quite possibly a little too self-serving. More must be entered into that equation. For example, do they want to work with you exclusively? Is this extended effort a one-time research project, or is it every single time?

 

My biggest personal frustration is when a new customer places us in the position of building an architecture for their needs, refining and refining, and then takes the configuration and uses it to solicit a competitive bid elsewhere. While we may have some slight advantage due to having registered the bid initially, we are a fully functional service provider. Our capacities extend far beyond the mere quoting of hardware. Our expertise simply in designing a quality architecture should, by all rights, give us some reasonable leg up. If a customer is buying based on pricing alone, there are plenty of reputable options. Let’s all try to interact with integrity, please?

 

I think that what’s clear here is that these decisions must be made cautiously, on a case by case basis. What you don’t want to do, specifically, is throw away a good relationship. What you must do is be aware that a beneficial relationship can be forever lost.

I'm back from Las Vegas and VMworld. I loved the energy at the event last week. It was great talking with vendors and attendees about technology and events. There were a handful of announcements, of course, but for me the real excitement is when you can sit down and talk to others that are using technology in interesting ways to solve interesting problems.

 

Anyway, here are the items I found most amusing from around the Internet. Enjoy!

 

VMware Reassembles Cloud Stack as New VMware Cloud Foundation

The key point in this article has to do with how the Cloud Foundation is built upon existing cloud providers such as Azure, AWS, and Google. Going forward, that's the model I see everyone adopting, the leveraging of existing cloud and not trying to build your own.

 

Nashville Hotel Suffered POS Breach For Three Years

Three. Years. That's not just a breach, that's gross negligence. Then again, considering how data breaches are so prevalent these days, maybe it's just average negligence.

 

AI, Machine Learning, and The Hitchhiker’s Guide

Nice summary of machine learning versus artificial intelligence, with a touch of Douglas Adams for good measure.

 

What Facebook Data Center Team Learned from Shutting Down Entire Facilities

Back in the day, when I was involved in DR testing, we just did failovers. We didn't power down an entire data center! Hat tip to Facebook for going through this pain, and for helping to make the world a better place, one cat video at a time.

 

The Self-Driving Car Race

I was thinking about self-driving cars while in Las Vegas last week, and a report that Singapore is deploying them as taxis in a restricted zone of the city. I believe that self-driving cars are going to be here sooner than we might think, and the tech behind them will be used elsewhere.

 

Autonomous Tractor Concept Takes The Farmers Out Of Farming

For example, self-driving tractors. There's no reason why we can't automate these as well as cars.

 

NASA to develop rules for flying small drones

And then we have drones. There is no reason they can't be autonomous as well. And once they get big enough to carry hundreds of pounds of cargo? Flying cars, dear reader.

 

When I arrived in Las Vegas, this was my assigned taxi stand number, which made me think "maybe I shouldn't gamble this week":

 

IMG_4343.JPG

The advantage is firmly in the hands of the attackers right now. The number of easy-to-use tools available and the speed at which new vulnerabilities are incorporated into these tools greatly outpaces the speed at which most organizations can stay on top of the threats. No matter how many precautions you have taken, a breach or incident will occur. You should operate under the assumed breach mentality. What are you going to do now?

 

Data centers are particularly juicy targets for attackers because there are so many different systems consolidated in a single place. Fortunately, the physical security of data centers is usually strong. Unfortunately, when you evaluate the digital security of data centers, we are far behind.

 

One lesson we can take from physical security principles is response: if someone were to physically attempt a breach, the plan would be clear. What’s your response after detecting a cyber-incident?

 

The Technical Response

 

For the technical response, one of the biggest questions is: do you shut down the attacker or monitor their activity? There are pros and cons for both approaches, but your organization needs to have a clear plan before the incident.

 

Let’s say you notice a large amount of traffic exiting your data center from a server that is running an unauthorized FTP service. If you disable the service immediately, will you be able to determine the full extent to which you are compromised? The attacker may still have access, and a sudden shutdown will also cause them to go underground. If your policy is to monitor the attacker, how long do you do that, and how can you wall off the attacker from gaining access to other systems?
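As a sketch of the kind of check that surfaces that scenario, here is a Python snippet comparing each server's outbound volume to its own baseline; the flow numbers and the 5x threshold are invented:

    # Per-host outbound traffic vs. an established baseline (MB/hour).
    # Real numbers would come from flow records; these are made up.
    baseline_mb = {"web-01": 120, "app-02": 80, "db-03": 15}
    observed_mb = {"web-01": 135, "app-02": 82, "db-03": 2150}

    for host, seen in observed_mb.items():
        expected = baseline_mb.get(host, 0)
        if expected and seen > 5 * expected:
            # Per the plan: decide in advance whether this triggers an
            # immediate shutdown or a watch-and-contain response.
            print(f"ALERT {host}: {seen} MB out vs ~{expected} MB baseline")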

 

Federal incident notification guidelines have been established by DHS/US-CERT, and their use is mandated by FISMA. US-CERT will work with agency IT personnel to analyze threats, exchange critical information with trusted partners, and engage cyber defense resources, as appropriate. Agencies also need to follow their departmental policies.

 

Breaches bring IT front and center for agency executives and have an immediate and often long-lasting impact on agency operations. When a security breach occurs, how you respond can make all the difference. If you have a well-structured incident response plan, you can mitigate much of the damage of an attack.

 

A comprehensive incident response plan needs to address the different types of incidents an agency could encounter. Roles and responsibilities of the response team need to be assigned and communicated, and backup personnel need to be identified. Other important parts of your plan include establishing a communication decision tree, as well as incident response procedures. And don’t forget regular testing and updating. Quarterly exercises can make sure staff know how to respond, find flaws in the plan, and lead to updating it accordingly.

 

Investment in prevention is necessary, but insufficient. If you don’t have a well-defined incident response plan in addition to those prevention solutions, then you aren’t doing enough to secure your data centers and critical facilities.

 

Find the full article on our partner DLT’s blog, TechnicallySpeaking.

In my time at Tech Field Day, I've heard a lot of discussion about monitoring products. Sometimes these talks get contentious, with folks pointing out that a certain feature is "useless" while another (usually missing) one is absolutely critical for a product to be taken seriously. Then one day it clicked: Monitoring isn't one thing; there are lots of different tasks that can be called "monitoring"!

Diff'rent Strokes

Let's consider storage. Administrators are concerned with configuration and capacity. IT management is worried about service levels and cost. Operations worries about failures and errors. The vendor has a whole set of parameters they're tracking. Yet all of these things could be considered "monitoring"!

Realizing that monitoring means different things to different people, I've come to look at monitoring tools differently too. And it's really brought them into focus for me. Some tools are clearly designed for IT, with troubleshooting and capacity planning as the focus. Others are obviously management-focused, with lots of pretty charts about service level objectives and chargeback. And so on.

Given the diversity of these tools, it's no wonder that they appear controversial when viewed by people with different perspectives. Systems administrators have a long-standing disdain for cost accounting tools, so it's no wonder they flinch when presented with a tool that focuses on that area. And they'd be sorely disappointed if such a monitoring package didn't show hardware errors, even though these wouldn't be appropriate for an IT management audience.

So What Cha Want?

Without getting too "zen" I think I can safely say that one must look inside oneself before evaluating monitoring products. What features do you really need? What will help you get your job done and look like a superstar? What insight are you lacking? You must know these points first before you even consider looking at a flashy new product.

And some products sure are flashy! I'll admit, I've often been sucked in by a sleek, responsive HTML5 interface with lots of pretty graphics and fonts. But sometimes a simple text-mode tool can be much more useful! Never underestimate the power of "du -sk * | sort -n", "iostat -x 60", and "df -h"! But high-level tools can be incredibly valuable, too. I'd rather have a tool surface the critical errors than try to "awk" my way through a log file...

Consider too whether you're ready to take advantage of the features offered. No amount of SLO automation will help you develop the SLAs in the first place. And does any company really have an effective cost model? The best tools will help you build understanding even if you don't have any starting inputs.

Stephen's Stance

I can't tell you how often I've tried out a new monitoring tool only to never return to look at the output. Monitoring isn't one thing, and the tools you use should reflect what you need to accomplish. Consider your objectives and look for tools that advance them rather than trying out every cool tool on the market.

 

I am Stephen Foskett and I love storage. You can find more writing like this at blog.fosketts.net, connect with me as @SFoskett on Twitter, and check out my Tech Field Day events.
