
Geek Speak

4 Posts authored by: agnostic_node1

In my LinkedIn profile, I write that I’m a fan of “elaborate IT Metaphors,” yet, quite literally, I’ve never actually written down a list of my favorites.

 

Listing out my favorite IT metaphors, sayings, aphorisms, and such is risky. Too much pithiness, and I risk not being taken seriously. Too much cynicism, and no one wants to talk to me.

 

And yet I must take that risk, because if you’re a practitioner of the IT arts as I am, then you’re used to engaging in these sorts of thoughtful/silly/humorous reflections.

 

Or maybe you don't, but you'll find humor and value in them nonetheless. Enjoy, and if you like, add your own!

 

| Metaphor | Meaning | Used in a Sentence | Origin/Notes |
| --- | --- | --- | --- |
| Dark side of the moon | Waiting for a host or device to reply to pings after a reload/reboot | "Server's gone dark side of the moon, standby." | NASA, obviously |
| Eat our own dogfood | Applying the same policies/tech/experience to IT that apply to users | "That dogfood tastes pretty nasty, and we've been dishing it out to our users for years." | Not sure, but heard on TWiT |
| DNS is like a phonebook | Computers speak in numbers, humans speak in words | "Look, it works like a phonebook, alright? Do you remember those things?" | My own metaphor to explain DNS problems |
| Fat finger | A stupid mistake in a perhaps otherwise solid plan (e.g. an IP address keyed incorrectly) | "Jeff totally fat fingered it." | Former boss/Homer Simpson? |
| Go FQDN or Go Home | Admonishment to correct the lazy IT tendency to code/build with IP addresses rather than FQDNs | "There are to be no IP addresses listed on the support page. Go FQDN or Go Home, son." | My own |
| Garbage In, Garbage Out | You get out of a system that which you put in | "I can't make beautiful pivots with a GIGOed set." | Unknown, but s/he was brilliant |
| Debutante in the Datacenter | A server or service that is high-profile/important but prone to failure and drama without constant attention | "Hyperion is doing its debutante thing again." | Heard it from someone somewhere in time |
| Cadillac Solution | A high-priced solution to a problem that may require only ingenuity/diligence | "Don't come to me with a Cadillac solution when a Kia will do." | My own, but really…Cadillac…I'm so old |
| The Sun Never Sets on Infrastructure | A reference to the 24/7 nature of infrastructure stack demand, by way of the British Empire | "And the sun never sets on our XenApp farm, so schedule your maintenance." | I used this metaphor extensively in my last job |
| Infrastructure is Roads / Applications are Cars / Users are Drivers | Reference to the classic split in IT departments | See here | Former colleague |
| Two Houses Both Alike in Dignity | Another reference to the AppDev & Infrastructure divide in IT | - | My own liberal abuse of Shakespeare's opening line in R&J |
| Child Partition/Parent Partition | Reference to me and my son, in light of hypervisor technology | "Child partition is totally using up all my spare compute cycles." | My own |
| Code is poetry | There is something more to technology than just making things work; be an artisan technologist | "Just look at that script, this guy's a poet!" | Google, but adapted by me for scripting and configs |
| Going full Fibonacci | The joy & euphoria inherent in a well-designed subnetting or routing plan wherein octets and route summaries are harmonized & everything just fits | "He went full Fibonacci to save memory on the router." | My own abuse of the famed Fibonacci Sequence, which honestly has nothing to do with IP subnetting and more to do with Dan Brown. Also applies to MAC address pools, because encoding your MAC address pools is fun |
| Dystopian IT | Dysfunctional IT departments | "I thought I was going to work in the future, not some dystopian nightmare IT group!" | Not sure |
| When I was a Child I thought as a Child | How I defend poor technical decisions that haunt me years later | - | A (perhaps blasphemous) homage to St. Paul |
| There are three ____ and the greatest of these is ____ | Another St. Paul reference | "And then there were three storage systems: file, block and object, and the greatest of these is file." | Useful in IT purchasing decisions |
| IT White Whale | Highly technical problems I haven't solved yet and obsess over | "I've been hunting this white whale for what seems like forever." | Borrowed from Herman Melville's Moby Dick |
| Servers are cattle, not pets | A pithy & memorable phrase to remind systems guys not to get attached to physical or virtual servers: view them as cattle that are branded at birth, worked hard throughout life, then slaughtered without pomp or circumstance | "No more Star Wars references as server names, ok? It's a cow, not your pet labrador!" | The guys who built Chef |
| Drawer of Tears | The drawer in your desk/lab where failed ideas (represented by working parts) go. My drawer of tears is filled with Raspberry Pis, Surface RTs, etc. | "Yeah, I tried that once; it ended up in the drawer of tears." | My own |

"Do you think the guys running Azure or AWS care if a server gets rebooted in the middle of the day?" I asked the Help Desk analyst when he protested my decision to reboot a VM just before lunch.

 

"Well, uhh. No. But we're not Azure," He replied.

 

"No we're not. But we're closer today than we have ever been before. Also, I don't like working evenings." I responded as I restarted the VM.

 

The help desk guy was startled, more than a little fear in his voice, but I reassured him I'd take the blame if his queue was flooded with calls from upset users.

 

Such are the battles one has to fight in IT Environments that are stuck in what I call the Old Ways of IT. If you're in IT, you know the Old Ways because you grew up with them like I did, or because you're still stuck in them and you know of no other way.

 

The Old Ways of doing IT go something like this:

 

  • User 1 & User 2 call to complain that Feature A is broken
  • Help desk guy dutifully notes feature A is busted, escalates to Server Guy
  • Server Guy notices Feature A is broken on Server A tied to IP Address 192.168.200.35, which is how User 1 & User 2 access Feature A
  • Server Guy throws up his hands, says he can't fix Server A without a Reboot on Evening 1
  • Help Desk guy tells the user nothing can be done until Evening 1
  • User1 & User 2 hang up, disappointed
  • Server Guy fixes problem that evening by rebooting Server A

 

I don't know about you, but working in environments stuck in the Old Ways of IT really sucks. Do you like working evenings & weekends? I sure don't. My evenings & weekends are dedicated to rearing the Child Partition and hanging out with the Family Cluster, not fixing broken old servers tied to RFC 1918 IP addresses.

 

As the VM rebooted, my help desk guy braced himself for a flood of calls. I was tempted to get all paternalistic with him, but I sat there, silent. Ninety seconds went by, and the VM came back online. The queue didn't fill up; the help desk guy looked at me, a bit startled. "What?!? How did you...but you rebooted...I don't understand."

 

That's when I went to the whiteboard in our little work area. I wanted to impart The New Way of Doing IT upon him and his team while the benefits of the New Way were fresh in their minds.

 

"Last week, I pushed out a group policy that updated the url of Feature A on Service 1. Instead of our users accessing Service 1 via IP Address 192.168.200.35, they now access the load-balanced FQDN of that service. Beneath the FQDN are our four servers and their old IP addresses," I continued, drawing little arrows to the servers.

 

"Because the load balancer is hosting the name, we can reboot servers beneath it at will," the help desk guy said, a smile spreading across his face. "The load balancer maintains the user's session...wow." he continued.

 

"Exactly. Now  you know why I always nag you to use FQDN rather than IP address. I never want to hear you give out an IP address over the phone again, ok?"

 

"Ok," he said, a big smile on his face.

 

I returned to automating & building out The Stack, getting it closer to Azure or AWS.

 

The help desk guy went back to his queue, but with something of a bounce in his step. He must have realized -the same way I realized it some years back- that the New Way of IT offered so much more than the Old Way. Instead of spending the next 90 minutes putting out fires with users, he could invest in himself and his career and study up a bit more on load balancers. Instead of rebooting the VM that evening (as I would have had him do), he could spend that evening doing whatever he liked.

 

As cliché as it sounds, the New Way of IT is about working smarter, not harder, and I think my help desk guy finally understood it that day.

 

A week or two later, I caught my converted help desk guy correcting one of his colleagues. "No, we never hand out the IP address, only the FQDN."

 

Excellent.

When it comes to the enterprise technology stack, nothing has captured my heart & imagination quite like enterprise storage systems.

 

Stephen Foskett once observed that all else is simply plumbing, and he’s right. Everything else in the stack exists merely to transport, secure, process, manipulate, organize, index or in some way serve & protect the bytes in your storage array.

 

But storage is complex to manage, especially in small and medium enterprises where the storage spend is rare and there are no do-overs. If you're buying an array, you've got to get it right the first time, and that means you've got to figure out a way to forecast how much storage you'll actually need over time.

 

I've used Excel a few times to do just that: build a model for storage capacity. Details below!

 

Using Excel to model storage capacity

Open up Excel or your favorite open-source alternative, and input your existing storage capacity in its entirety. Let’s say you have (direct, shared, converged, or otherwise) 75 terabytes of storage in your enterprise.

 

Now calculate how much of that is available for use, both in absolute terms (TBs!) and as a percentage of used, committed space. Of that 75TB, maybe you've got 13 terabytes left as usable capacity. Perhaps some of that 13TB is in a shared block or NFS storage array; perhaps all that remains for your use is direct-attached storage inside production servers. If it's the latter, you need to find a way to reflect the difficulty you’ll have in using the free capacity you have.

 

Now array that snapshot of your existing storage (75TB, 13TB free, 83% Committed) in separate columns, and in a new column, populate Month 1, 2, 3 all the way out to 36 or 72. What's your storage going to look like over time?

 

The reality is that unless you work in one of those rare IT environments that’s figured out the complex riddle of data retention, most IT environments will see storage demand grow over time. Use this to your advantage.

 

Sometimes that growth is predictable, other times it’s not. Reflect both scenarios in the same column. Assume that in months 1-12 there is a 0.3% monthly demand for more storage against your 13TB of remaining capacity (about 39 GB every month).

 

In month 13, imagine that some big event happens (a new product launch, a merger/acquisition, etc.) and demand that month is extreme: the business needs about 15% of what is by then roughly 12.7TB of remaining storage. If you’re in a place that only has direct-attached storage, this might be the month your juggling act becomes especially acute; what if none of your servers has 2 terabytes of free space?

 

By now your storage capacity model should be taking shape. You can plug in different categories of storage (scale up your S3 or Azure Blob storage to meet sudden spikes in demand, for instance), and you can contemplate disposal of some of your existing legacy storage too.
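If you'd rather sanity-check the spreadsheet with code, here's a rough sketch in Python of the same model using the numbers above (75TB total, 13TB free, 0.3% monthly growth, a 15% spike in month 13). The figures and the flat growth rate are just this post's illustrative assumptions, not a forecast for your environment.

```python
# Rough sketch of the capacity model described above. All numbers are the
# illustrative assumptions from this post, not real data.
TOTAL_TB = 75.0           # total capacity across the enterprise
free_tb = 13.0            # usable capacity left today
MONTHS = 36               # forecast horizon
BASE_GROWTH = 0.003       # steady monthly demand, as a fraction of remaining free space
SPIKES = {13: 0.15}       # month -> extra demand (product launch, merger, etc.)

print("Month   Free TB   Committed %")
for month in range(1, MONTHS + 1):
    demand = free_tb * (BASE_GROWTH + SPIKES.get(month, 0.0))
    free_tb = max(free_tb - demand, 0.0)
    committed_pct = (TOTAL_TB - free_tb) / TOTAL_TB * 100
    print(f"{month:>5}   {free_tb:7.2f}   {committed_pct:9.1f}")
```

Month 1 works out to roughly 39 GB of new demand and month 13 to roughly 2TB, which is exactly the kind of number that makes a direct-attached-only shop sweat; charting the Free TB column gives you the graph for the CIO.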

 

storagedemand.png

Once your model is finished, you should be able to make a nice chart to present to your CIO, a model that shows you've done your homework in answering the question, "How much storage do we need?"

 

"I know real estate applications," the Business Analyst said confidently to the dozen of us cloistered in a conference room.

 

"And real estate applications don't work well when they're virtualized," she insisted, face lowered, eyes peering directly at me over the rims of her Warby Parkers.

 

For a good 5-8 seconds, all you could hear was the whirring of the projector's fan as the Infrastructure team soaked in the magnitude of the statement.

 

I had come prepared for a lot of things in this meeting. I was asking for a couple hundred large, and I had spreadsheets, timelines, budgets, and a project plan. Hell, I even had an Excel document showing which switch port each new compute node would plug into, and whether that port would be trunked, access, routed, a member of a port-channel, and whether it got a plus-sized MTU value. Yeah! I even had my jumbo frames all planned & mapped out, that's how I roll into meetings where the ask is a 1.x multiple of my salary!

 

But I had nothing for this...this...whatever it was...challenge to my professional credibility? An admission of ignorance? Earnest doubt & fear? How to proceed?

 

Are we still fighting over whether something should be virtualized?

 

It was, after all, 2014 when this happened, and the last time I had seen someone in an IT department resist virtualization was back when the glow of Obama was starting to wear off on me...probably 2011. In any case, that guy no longer worked in IT (not Obama, the vResistor!), yet here I was facing the same resistance long, long, long after the debate over virtualization had been settled (in my opinion, anyway).

 

Before I could get in a chirpy, smart-ass "That sounds like a wager" or even a sincere "What's so special about your IIS/SQL application that it alone resolutely stands as the last physical box in my datacenter?", my boss leapt to my defense and, well, words were exchanged between BAs, devs, and Infrastructure team members. My Russian dev friend and I glanced at each other as order broke down...he had a huge Cheshire cat grin, and I bet the bastard had put her up to it. I'd have to remember to dial the performance on his QA VMs back to dev levels if I ever got to build the new stack.

 

The CIO called for a timeout, order was restored, and both sides were dressed down as appropriate.

 

It was decided then to regroup one week hence. The direction from my boss & the CIO was that my presentation, while thorough, was at 11 on the Propellerhead scale and needed to answer some basic questions like, "What is virtualization? What is the cloud?"

 

You know, the basics.

 

Go Powerpoint or Go Home

 

Somewhat wounded, I realized my failure was even more elemental than that. I had forgotten something a mentor taught me about IT, something he told me to keep in mind before showing my hand in group meetings: "The way to win in IT is to understand which Microsoft Office application each of your teammates would have been born as if they had been conceived by the Office team. For example, you're definitely a Visio & Excel guy, and that's great, but only if you're in a meeting with other engineers."

 

Some people, he told me, are amazing Outlookers. "They email like it's going out of style; they want checklists, bullet points, workflows and read receipts for everything. Create lots of forms & checklists for them as part of your pitch."

 

"Others need to read in-depth prose, to see & click on footnotes, and jot notes in the paper's margin;  make a nice .docx the focus for them."

 

And still others -perhaps the majority- would have been born as a Powerpoint, for such was their way of viewing the world. Powerpoint contains elements of all other Office apps, but mostly, .pptx staff wanted pictures drawn for them.

 

So I went home that evening and got up into my Powerpoint like never before. I built an 8-page slide deck using blank white pages. I drew shapes, copied some .pngs from the internet, and made bullet points. I wanted to introduce a concept to that skeptical Business Analyst who nearly snuffed out my project, a concept I think is very important in small to medium enterprises considering virtualization.

 

I wanted her to reconsider The Stack (In Light of Some Really Bad Visualizations).

 

So I made these. And I warn you they are very bad, amateur drawings, created by a desperate virtualization engineer who sucks at powerpoint, who had lost his stencils & shapes, and who was born a cell within a certain column on a certain row and thought that that was the way the world worked.

 

The Stack as a Transportation Metaphor

 

Slide 2: What is the Core Infrastructure Stack? It's a pyramid, with people like me at the bottom, people like my Russian dev friend in the middle, and people like you, Ms. Business Analyst, closer to the top. And we all play a part in building a transportation system, which exists in the meatspace (that particular word was not in the slide, and was added by me, tonight). I build the roads, tunnels & bridges; the dev builds the car based on the requirements you give him; and the business? They drive the car the devs built to travel on the road I built.

 

Also, the pyramid signifies nothing meaningful. A square, cylinder, or trapezoid would work here too. I picked a pyramid or triangle because my boy would say "guuhhhl" and point at triangles when he saw them.

 

stack1.png

I gotta say, this slide really impressed my .pptx colleagues and later became something of an underground hit. Truth be told, inasmuch as anything created in Powerpoint can go viral, this did. Why?

 

I'd argue this model works, at least in smaller enterprises. No one can argue that we serve the business, or driver. I build roads when & where the business tells me to build them. Devs follow, building cars that travel down my roads.

 

But if one of us isn't very good at his/her job, it reflects poorly on all of IT, for the driver can't really discern the difference between a bad car & a bad road, can they?

 

What's our current stack?

 

"Our current stack is not a single stack at all, but a series of vulnerable, highly-disorganized disjointed stacks that don't share resources and are prone to failure," I told the same group the next week, using transitions to introduce each isolated, vulnerable stack by words that BAs would comprehend:

 

multiplestacks.png

My smart-ass side wanted to say "1997 called, they want their server room back," but I wisely held back.


"This isn't an efficient way to do things anymore," I said, confidence building. No one fought me on this point, no one argued.

 

What's so great about virtualizing the stack?

 

None of my slides were all that original, but I take some credit for getting a bit creative with this one. How do you explain redundancy & HA to people woefully unprepared for it? Build upon your previous slides, and draw more boxes. The Redundant Stack within a Stack:


redundantstack.png

The dark grey highlighted section is -notwithstanding the non-HA SQL DB oversight- a redundant Application Stack, spread across an HA Platform, itself built across two or more VMs, which live on separate physical hosts connecting through redundant core switching to Active/Active or A/P storage controllers & spindles.


I don't like to brag (much), but with this slide, I had them at "redundant." Slack-jawed they were as I closed up the presentation, all but certain I'd get to build my new stack and win #InfrastructureGlory once more.


And that Cloud Thing?

 

Fuzzy white things wouldn't do it for this .pptx crowd. I struggled, but to keep things consistent, I built a 3D cube that was fairly technical yet consistent with the previous slides. I also got preachy, using this soapbox to remind my colleagues why coding against anything other than the Fully Qualified Domain Name was a mortal sin in an age when our AppStack absolutely required being addressed at all times by a proper FQDN in order to be redundant across datacenters, countries, even continents.

 

hybridstack.png

There are glaring inaccuracies in this Hybrid Cloud Stack, some of which make my use of Word Art acceptable, but as a visualization, it worked. Two sides to the App Stack: the Private Cloud (the end-state of my particular refresh project) and the Public Cloud. Each has its strengths & weaknesses, and each can be used by savvy technology teams to build better application stacks, to build better roads & cars for the drivers in the business.


About six weeks (and multiple shares of this .pptx) later, my new stack arrived: 80 cores (with two empty sockets per node for future-proofing!), about 2TB of RAM, 40TB of shared storage, and a pair of Nexus switches with Layer 3 licensing.

 

And yes, a few weeks after that, a certain stubborn real estate application was successfully made virtual. Sweet.
