
Geek Speak


I have a confession: I like documentation.

 

I know.

 

But I have good reasons. Well, some reasons are better than others. I like to type (not really a good reason); I like to write (although I'd argue I'm not all that great at it); and I like to have good references for projects I've done that will hopefully be useful to any future person--or myself--who has to support what I've put together (OK, this is actually a good one).

 

When talking about data projects or DBA-related work, reference documentation can contain a lot of nitty-gritty information. There are two specific areas that I think are most important, however. The first of these is useful in the case of Business Intelligence projects, and usually takes the form of a combination Data Dictionary/Source-to-Target Mapping listing. The other is in the vein of my post from last week, wherein I discussed asking the question of "why" during the early stages of a project. Even having only these "why" questions and answers written down can and will go a long way towards painting the picture of a project for the future person needing to support our masterpieces.

 

As good as it truly is for everyone involved in a project, writing reference documentation isn't necessarily easy or a lot of fun to do. One of the most frustrating things about it is that in a lot of instances, time to write this documentation isn't included in the cost or schedule of the project, especially, in my experience, if it's following something like the Scrum methodology. In the case of a "day job," spending time on documentation may not have the same immediate need, as the people who participated on the project theoretically aren't going anywhere; furthermore, time spent on documentation is time taken away from other work they could be doing. In the case of an outside consultancy, paying an outside firm to sit around and write documentation has an obvious effect on the project's bottom line.

 

If I had to guess, I'd say most of you readers out there aren't as into documentation as I am… I really do understand. But at the same time, what do you do for the future? Do you rely on comments in code or something like Extended Properties on database objects?

And, for those of you that do attempt to provide documentation… do you have to make special efforts to allow the time to create it? Any secrets to winning that argument? ;-)

scuff

Can you recommend ...

Posted by scuff Jun 10, 2015

 

That has to be one of my most feared phrases … “Can you recommend …?” Often the person is actually saying “can you make the buying decision for me?” To which the answer is "no, I can’t make up your mind for you. Would you ask me to choose your car or your house?"

 

The easiest way to keep IT people busy is to lock us in a room until we can agree on the best anti-virus software. 

 

The problem with technology is there is no ‘Best’. There’s only the ‘best for you’. You can be guided to making that decision based on input from others but here’s the thing … that input is going to be coloured by their experience and the experiences of their peers.

 

And as technology grows and the capabilities between different products start to blur, you can start to stare at things that basically do the same thing. So how do you choose? Or how do you recommend what’s truly best for the business, putting personal experience aside? Should you put personal experience aside?

 

Put your hand up if you've ever seen a company throw out one software vendor because a new manager has come in and they prefer something else. Exchange versus Lotus Notes. Microsoft versus Google. On-premises versus cloud. Mac vs. PC. All of these decisions are influenced by the previous experiences of the decision makers.

 

My tactic is to squeeze as much information out of them as I can about how they work, what they want to achieve, etc., and then say "Well, this is what I'd do if I was you…" But I can almost guarantee someone else will have a different opinion. They're like bellybuttons. Everybody's got one!

 

How do you handle the recommendations question? Do you weigh up all of the available options for them or go on past experience?

 

Vegaskid

The human factor

Posted by Vegaskid Jun 8, 2015

In my last article (Defence in depth), I wrote about a number of different approaches that should be considered for a defence in depth security model. In this article, I go into a little more depth on a topic which is perhaps the most exciting for me, but also one of the hardest to fully mitigate: the human factor.

 

Imagine a fantasy world where security vendors' claims are true: their products protect you against the bad guys' complete technological arsenal. Every time they try to infiltrate your network, either from the outside or within, they are detected and blocked with no impact on your resources. I did ask you to use your imagination! That's an unlikely scenario, as I'll discuss in an upcoming post, but even if it could come to fruition, if your human resources have not been trained to a suitable level in InfoSec, there are a host of other attack vectors at the attacker's disposal. The list below outlines some of these:

 

  • USB drive-by. An attacker drops a USB pen drive in an effective location and a member of staff picks it up; curiosity gets the better of them, leading to them plugging it into a corporate machine. An effective location could be the car park, reception, the reception-area toilet, or a popular spot nearby where staff like to meet at lunch or after work, e.g. a cafe, park, or bar
  • Phishing email. A phishing email is one that tries to extract information from you. An example would be one that purports to be from your bank, saying you need to log in and confirm your details. You click on what looks like a legitimate link and are taken to what looks like your bank's login page. What actually happens is you are directed to a clone of your bank's website that is controlled by the attackers, who then get your legitimate details and can use them to log in to your real bank account. That would be a personal attack, but imagine how many vendors, suppliers, and partners your company works with. You probably have logos for lots of them on your corporate website, so it's easy to find some of this information out
  • Phone calls. An attacker calls your Helpdesk claiming to be the CEO and asks for their password to be reset. Maybe you have a process in place for ensuring the request is legitimate, but what if the attacker starts using guilt and authority to pressure the Helpdesk advisor into bypassing that process? Next thing, the CEO's email account has been breached, and think of the treasure that most likely lies within. Maybe the real CEO made such a hurried request in the last few months and somebody got their fingers burnt for refusing to make the change

 

This limited list highlights a number of points, the primary one being that, more often than not, your people are the weakest link in your security chain. Most people are aware of the types of attacks listed above, so training needs to be clever: not just a once-a-year exercise to tick a box, but ongoing and done in innovative ways to prevent message fatigue. The last example highlights another big point: you need buy-in from the top all the way down. If your CEO needs a password reset in a hurry that breaks protocol, staff should be commended for not complying with that request, no matter how high up it comes from.

 

I'd love to know if you have any specific tales of the human link being leveraged in an attack.

The speed of technology change is increasing at an alarming rate. Startups can fund, build, and deploy software in shorter timeframes, and consumers can install and adopt it quickly and easily. Gone are the days of years between major versions, as software vendors roll out updates without relying on shipping out CDs.

 

So how do IT professionals keep up with it all? How do you find the time to do your job and upskill and understand the latest threat or IT trend being talked about in the news?

Of course, the Thwack community is a great place to discuss announcements and new trends amongst your peers. But how else do you keep pace with technology changes? Here are a few of my favourite tools and sources:

 

Cloud Business Blueprint podcast http://www.cloudbusinessblueprint.com/category/podcast/ - The latest announcements in Cloud computing and tips for running a successful IT services business.

 

Microsoft's Channel 9 https://channel9.msdn.com/ - Microsoft event presentations including Build and Ignite. Don't be put off by the msdn tag – there's a ton of information on performance, security, migrations & upgrades too. They've also just released an iOS app http://t.co/VKZC6m1cMn

 

Microsoft Partner Network Blog http://blogs.msdn.com/b/auspartners/ - Subscribe via email so you don’t miss an announcement, but you can read & delete the ones that aren’t relevant to you.

 

Twitter heroes: Find the engaging, knowledgeable people in your field. My favorites include @Edbott @troyhunt @mkasanm @shanselman @WinObs @maryjofoley @cfiessinger @tomwarren

 

Pluralsight http://www.pluralsight.com/ - The content in the IT administrator track is outstanding & will leave you wondering when you're going to fit your real work in.

 

Pocket https://getpocket.com/ - I’ve heard people rave about this great ‘read later’ app. I have a great ‘read later’ folder in my Inbox but now it’s not so little. The app is on my to-do list to check out. Any fans of it out there?

 

OneNote https://www.onenote.com/ or EverNote https://evernote.com/ - Pick one of these ‘digital scrapbooks’ for saving handy URLs, webinar screenshots & course notes.

 

While it’s important to have an awareness of the changes in tech, you also need to have a filter. Watching the Microsoft Build keynote highlights, I got excited about Docker and the containerization of apps and I’m not a developer! But I can see the potential. I’ve also seen product announcements where I think “that’s great but I don’t have time for it right now”. Be aware of it, but don’t go deep diving into technologies that aren’t on your radar currently or in the near future or you’ll end up with Bright Shiny Object Syndrome.

 

Just as importantly though, how are you using the software and tools that you have in your hands right now? Have you taken the time to check out their latest features and apply them to how you work? How has the latest version of X made you more efficient, improved your day, or helped you get a better result? Take time this week to investigate one software feature you didn't know existed. Because if you're still using Office like you did in 1997, what's the point in upgrading?

 

Let me know your favorite blogs, podcasts, and websites. How do you manage your information sources?

Leon Adato

Respect Your Elders

Posted by Leon Adato Jun 4, 2015

"Oh Geez," exclaimed the guy who sits 2 desks from me, "that thing is ancient! Why would they give him that?"

 

Taking the bait, I popped my head over the wall and asked "what is?"

 

He showed me a text message sent to him by a buddy, an engineer (EE, actually) who worked for an oil company. My co-worker's iPhone 6 displayed an image of a laptop we could only describe as "vintage":

(A Toshiba Tecra 510CDT, which was cutting edge…back in 1997.)

 

"Wow." I said. "Those were amazing. I worked on a ton of those. They were serious workhorsesyou could practically drop one from a 4 story building and it would still work. I wanted one like nobody's business, but I could never afford it."

 

"OK, back in the day I'm sure they were great," said my 20-something coworker dismissively. "But what the hell is he going to do with it NOW? Can it even run an OS anymore?"

 

I realized he was coming from a particular frame of reference that is common to all of us in I.T.: newer is better. Period. With few exceptions (COUGH-Windows M.E.-COUGH), the latest version of something, be it hardware or software, is always a step up from what came before.

 

While true, it leads to a frame of mind that is patently untrue: a belief that what is old is also irrelevant. Especially for I.T. professionals, it's a dangerous line of thought that almost always leads to unnecessary mistakes and avoidable failures.

 

In fact, ask any I.T. pro who’s been at it for a decade, and you'll hear story after story:

  • When programmers used COBOL, back when dinosaurs roamed the earth, one of the fundamental techniques drilled into their heads was "check your inputs." Thinking about the latest crop of exploits, be they an SSLv3 issue like POODLE, a SQL injection, or any of a plethora of web-based security problems, the fundamental flaw is the server NOT checking its inputs for sanity.
  • How about the OSI model? Yes, we all know it's required knowledge for many certification exams (and at least one IT joke). But more importantly, it was (and still is) directly relevant to basic network troubleshooting.
  • Nobody needs to know CORBA database structure anymore, right? Except that a major monitoring tool was originally developed on CORBA and that foundation has stuck. Which is why, if you try to create a folder-inside-a-folder more than 3 times, the entire system corrupts. CORBA (one of the original object-oriented databases) could only handle 3 levels of object containership.
  • PowerShell can be learned without understanding the Unix/Linux command-line concepts. But it's sure EASIER to learn if you already know how to pipe ls into grep into awk into sort so that you get a list of just the files you want, sorted by date (a quick comparison sketch follows this list). That technique (among other Unix/Linux concepts) was one of the original inspirations for PowerShell.
  • Older revs of industrial motion-control systems used specific pin-outs on the serial port. The new USB-to-serial cables don't mimic those pin-outs correctly, and trying to upload a program with the new cables will render the entire system useless.
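
For anyone who came to the pipeline idea from the PowerShell side first, here is a rough comparison sketch; the file pattern and sort key are illustrative assumptions, not anything from the original posts.

    # Unix-style: ls -l | grep '\.log$' | sort   (conceptually)
    # A rough PowerShell analogue: list *.log files, sorted by date.
    Get-ChildItem -Path . -File |
        Where-Object { $_.Name -like '*.log' } |
        Sort-Object LastWriteTime |
        Select-Object Name, LastWriteTime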

 

And in fact, that's why my co-worker's buddy was handed one of those venerable Tecra laptops. It had a standard serial port and came preloaded with the vendor's DOS-based ladder-logic programming utility. Nobody expected it to run Windows 10, but it filled a role that modern hardware simply couldn't.

 

It's an interesting story, but you have to ask: aside from some interesting anecdotes and a few bizarre use cases, does this have any relevance to our day-to-day work?

 

You bet.

 

We live in a world where servers, storage, and now the network are rushing toward a quantum singularity of virtualization.

 

And the “old-timers” in the mainframe team are laughing their butts off as they watch us run in circles, inventing new words to describe techniques they learned at the beginning of their career; making mistakes they solved decades ago; and (worst of all) dismissing everything they know as utterly irrelevant.

 

Think I'm exaggerating? SAN and NAS look suspiciously like DASD, just on faster hardware. Services like Azure and AWS, for all their glitz and automation, aren't as far from rented time on a mainframe as we'd like to imagine. And when my company replaces my laptop with a fancy "appliance" that connects to a Citrix VDI session, it reminds me of nothing so much as the VAX terminals I supported back in the day.

 

My point isn't that I’m a techno-Ecclesiastes shouting "there is nothing new under the sun!" Or some I.T. hipster who was into the cloud before it was cool. My point is that it behooves us to remember that everything we do, and every technology we use, had its origins in something much older than 20 minutes ago.

 

If we take the time to understand that foundational technology, we have the chance to avoid past missteps, leverage undocumented efficiencies built into the core of the tools, and build on ideas elegant enough to have withstood the test of time.


Got your own "vintage" story, or long-ago-learned tidbit that is still true today? Share it in the comments!

Over the last few months, I've spent a lot of time talking to different people about the importance of the "B" in BI--as in "Business Intelligence" (stay with me, DBAs, I'll get you roped in here soon enough). I joke that that "B" is there for a reason and it's no accident that it's the first letter in the acronym. The point I make is that the hard part of BI projects isn't deciding what technology to use or working through hurdles that come from questionable data, but instead, understanding the Business needs that got the project started in the first place.

 

Or, put another way, making sure that the "Why?" questions are asked first and foremost, and that the answers to those questions are used to ask better "What?" questions on down the line while the solution is designed, developed, and implemented.

 

This works well when the word "Business" is directly included in the type of data work I do, but it applies just as readily to plain ol' DBA work as well. Everything from data modeling considerations to planning big database consolidation projects needs to start out by asking and understanding the "Why"s coming from our business leaders and our userbase. Sometimes these questions are easy--"Why do you want to be able to restore yesterday's database backup?" Other times they might be hard to get answers to--"Why is this KPI calculated three different ways in this one report?" (true story!). Chances are, asking these questions will have some kind of monetary impact. For a BI project, it may mean the project completes sooner with less re-work late in the game, while the need to upgrade some old, unreliable hardware may lead to a server virtualization project for a DBA team.

 

I've seen both BI and DBA projects go sideways because the right questions weren't asked early enough in the project lifecycle; conversely, I've worked with some great BAs who are fantastic at understanding the business rationale and ensuring we, the technology team, all did, too.

 

What kind of save-the-day stories do you have that started out by asking "Why?"

jtroyer

Are You Pi Shaped?

Posted by jtroyer Jun 2, 2015

We're well into 2015. How is your year going, and how are you preparing for the rest of your career? Earlier this year, four of us got together and had a great talk about technology predictions for 2015.

 

One of the things I talked about was becoming a “Pi-shaped expert.” You might ask, “What’s a pi-shaped expert?” Well, it’s all about how you develop your skills and your career. If you read advice on skills and jobs, you will often see advice on being a “T-shaped expert.”

 

One of the things we struggle with in IT is in being a generalist vs. being a specialist. There are lots of ways of looking at this: Kong Yang has written on Geek Speak about Generalists, Versatilists, and Specialists, which is a different window into this same issue. But sticking with our letter-shaped model, let's define the T-shaped expert. This kind of person has a broad range of skills which they're reasonably familiar with; that's the top of the "T". Maybe for you it's Windows, networking, security, being really good at writing requirements documents, scripting, and Exchange. And then the T-shaped expert also has the stem of the "T": the one skill area in which they're an expert. Maybe for you it's VMware and virtualization.

 

The T-shape is what gets you the good job. Generalists are where we all start, but generalists in IT tend to get stuck in smaller shops doing everything and are often underpaid and overworked. More advanced jobs and bigger environments require—and pay for—experts in a given area, but who can pinch hit in other roles.

 

So being a T-shaped expert is the best way to get a good job, move to a better situation, and have a good career. At least for a while. If you became the SAN expert or the VMware expert ten years ago, you have done well in the ensuing decade. But if you’re *still* the SAN expert or the VMware expert ten years later, and that’s all you are bringing to the table, you should be a little concerned. The SAN of 2015 is a lot easier to manage, and in many emerging environments in the cloud or with hyperconverged infrastructure, there’s not even a SAN to manage. Your T-shaped skill simply isn’t as valuable.

(Image: the Pi-shaped expert)

This is where the Pi-shape comes in. While you are still working in that T-shaped job with your T-shaped pay, you need to be building new interests and new skills. You should be using small projects at work as well as side projects outside work to develop another leg on your “T,” making it a “Pi.”

 

That new skill in 2015 might be something like automation and orchestration, configuration management, DevOps, or containers. Or perhaps it’s expertise in hyperconverged infrastructure, software-defined networking, or public cloud. Or it could be going deeper into application performance management and relating back to DevOps and Continuous Integration. It should be something that you find interesting and something that is becoming increasingly relevant now and growing in the future. As you develop this second leg of your “Pi,” you open up new opportunities in the future.

 

IT is about continuous learning. Don't get stuck being a generalist. But also don't get stuck with one deep skill that's stuck in the past. You probably have decades more left in your career. Always be stepping to a new stone in the river by becoming a Pi-shaped expert.

Vegaskid

Defence in depth

Posted by Vegaskid Jun 1, 2015

Ever since I started working in IT and took an interest in the Information Security aspect, I have heard the term 'defence in depth' being bandied around, qualified to varying degrees. In short, defence in depth is an approach where you have different security controls at different places in your overall system. It is also referred to as the castle approach, harking back to days of yore. In those days, having tall and thick walls was not always enough. You wanted to ensure there was only one entrance to your castle (ignoring the fact you might have several back doors for quick escapes!), a moat to protect your perimeter, and a drawbridge to allow only authorised persons to come across. Once in, they would often need to relinquish their weapons, and the upper-class people would often live in the middle of the grounds for further protection.

 

 

In the world of IT, the same approach is realised through the following, non-exhaustive list:

 

 

  • Firewalls. This is the drawbridge. Only allow traffic from the right sources, going to the right destinations; everything else gets left outside the gate (a small illustrative rule follows this list)
  • IPS/IDS. You almost certainly want the Internet to get to your public web server, but if somebody out there is trying to attack a weakness in your application, a basic L3/L4 firewall won't cut the mustard. You need something that can look in to the application traffic and determine if something untoward is happening. It can either drop the traffic, slow it down, send you an alert or even launch a counter attack depending on your setup
  • Patching. Vulnerabilities in your operating system, middleware, and applications can often be mitigated by a security device such as an IPS, but what if the attacker is already in your network, behind this layer of protection? It is extremely important that you keep all your systems patched and have a rock-solid patching policy that is adhered to. This refers not only to servers, but to network devices, storage, and any other device that your IT relies on
  • Physical. Where is your data physically kept? On a machine under your desk? In a comms room somewhere at your head office? Or maybe in a tier 3 data centre? You could have the best of breed security devices at every level in your network but if you leave important print outs on your desk or like taking home some key files for the board on an unencrypted USB key, you are negating all the protection that they offer
  • Policies. It's all very well talking the talk, but if you don't have all of these steps and processes documented somewhere, people won't even remember that there is a way they are supposed to be doing something, or you'll end up with 10 engineers doing something in 10 different ways in a vague attempt to comply. Get buy-in from senior management and create a culture of security that people will not try to circumvent. Which leads to my next point...
  • Training. InfoSec training can traditionally be very dry and usually comes from a "let's plough through this stuff for another year" angle. That is because the people doing the training are often from an InfoSec compliance background rather than Security Operations, and it's a box-ticking exercise rather than an attempt to really engage people to be thinking about InfoSec all year round
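
To make the "drawbridge" idea concrete, here is a minimal sketch using the Windows Firewall cmdlets in PowerShell as a stand-in for whatever firewall you actually run; the subnet and port are hypothetical, and a perimeter firewall would express the same rule in its own syntax.

    # Allow HTTPS to this server only from a known management subnet
    # (10.1.20.0/24 is a made-up example); everything else stays outside the gate.
    New-NetFirewallRule -DisplayName "Allow HTTPS from mgmt subnet" `
        -Direction Inbound -Protocol TCP -LocalPort 443 `
        -RemoteAddress 10.1.20.0/24 -Action Allow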

 

 

The list above is brief and incomplete, but you can see that even in that list, there is a broad range of areas that need addressing to really give good protection.

 

 

My question to you now is, how good is your approach to information security? Have you worked at companies that have ignored most of these well known approaches? Have others been a shining beacon of how to protect your treasured resources? I look forward to hearing your thoughts and experiences.

mindthevirt

The Trouble Of VDI

Posted by mindthevirt May 29, 2015

In my last post, How To Right-Size Your VDI Environment, I gave some insight into how you can right-size your VDI environment and how to validate your VDI master template.


As VDI is gaining a bigger market share and SAN/NAS prices are dropping every year, there are also some problems on the horizon.

The benefits are often obvious and include, but are not limited to:

 

  • Central management for your entire desktop infrastructure, whether your end-users are located in the U.S. or in Europe.
  • Backup and recovery capabilities are endless, since you can use built-in snapshots to take quick backups of your desktops and restore them within minutes rather than hours or days.
  • Save money by reducing your carbon footprint and cutting your costs for expensive workstations.
  • Easy and inexpensive way to migrate OS’s. You can easily deploy new desktops for your users with new Operating Systems.

 

With all the benefits of VDI, there are also some problems associated with it, and you should be aware of them:

 

  • Not every Antivirus software is optimized for VDI desktops and often Antivirus scans & updates can cause severe performance impacts.
  • Windows license management becomes a nightmare, especially if you dynamically provision desktops.
  • Troubleshooting becomes more difficult since several components are involved which means you’ll often coordinate with other teams e.g. network admin, storage admin, VMware admin.
  • Not every storage solution is optimized or recommendable for VDI use cases. Just because you have a SAN sitting in your basement doesn't mean it will be a great solution for your VDI environment. You should count on roughly 10 IOPS per active user (a quick sizing sketch follows this list).
  • Not all applications can be virtualized
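
As a back-of-the-envelope illustration of that ~10 IOPS-per-active-user figure, here is a small sizing sketch; the user count and headroom factor are hypothetical inputs you would swap for your own numbers.

    # Rough VDI storage sizing, assuming ~10 IOPS per active user (see above).
    $activeUsers  = 500          # hypothetical concurrent desktop count
    $iopsPerUser  = 10
    $peakFactor   = 1.3          # extra headroom for boot/login storms
    $steadyState  = $activeUsers * $iopsPerUser
    $withHeadroom = [math]::Ceiling($steadyState * $peakFactor)
    "Steady-state estimate: {0} IOPS" -f $steadyState
    "With headroom: {0} IOPS" -f $withHeadroom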

 

This list of benefits and disadvantages will hopefully help you make a better decision as you and your company consider investing in and expanding your VDI environment. If you have any questions regarding VDI benchmarking, things to consider, or best practices, please post them in the comments below and I will make sure to address them quickly.

As of this past week, it is officially the 'beginning' of summer here in North America. That's pretty exciting: it means more people will be out of school or work and potentially going on vacation, which is all very good, and by the same token can be very bad. What this means for you and me is:

 

Fewer people are watching the hen house, and those few may be pent up and wanting to be outside! Yet by the same token, you have bored hackers who are looking to compromise our networks!

Okay, maybe this is true, and maybe it isn't, but here is one thing that is true!

 

... When was the last time you did some spring cleaning of your rulesets? Gone through and validated that your Syslog, NTP, and who-knows-what-else policies are set correctly, communicating with the right places, and so on and so forth?! Do we have scripts which help automate this?!

 

 

Share and share alike!

 

There is no better time than the present to go through and clean up your rules, but we shouldn't have to operate in a vacuum; we are a community!

 

A few years ago, I wrote a few blog posts on VMware PowerCLI one-liners because they'd be useful:

PowerCLI One-Liners to make your VMware environment rock out!

Using PowerCLI to dump your permission structure in vCenter

 

The one-liners are in the posts themselves, with even more in the comments (which I updated after the fact). Not all of them are about security-related matters; there's insight into setting your Syslog, NTP, and so much more. But I'm sure you have something you find very useful that you either wrote a script for once, or regularly run to make sure your environment hasn't changed.
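
In that spirit, here is a minimal PowerCLI audit sketch (not one of the one-liners from those posts); it assumes PowerCLI is installed and that you are already connected to vCenter with Connect-VIServer, and it simply reports NTP and syslog settings so you can spot drift.

    # Report NTP and syslog configuration for every host in the connected vCenter.
    Get-VMHost | Select-Object Name,
        @{N = 'NTPServers'; E = { (Get-VMHostNtpServer -VMHost $_) -join ', ' }},
        @{N = 'Syslog';     E = { (Get-VMHostSysLogServer -VMHost $_) -join ', ' }} |
        Sort-Object Name | Format-Table -AutoSize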

 

If you have any of these to share, I bet I know some fellow Thwack users who would be ecstatic to use them!

So let's get into the Spring Cleaning spirit through the power of automation, or as I like to say, "Let's not do things more than once, for the first time!" Okay, I'm not sure I like to say it in that particular context... But I digress significantly.

 

If you have any particular scripts, things to check, one-liners or powerful life changing experiences to share, we all look forward to it!

Happy Summer everyone! Let's make it an awesome one!

Capturing lightning in a bottle seems apropos for achieving disruptive innovation. In fact, disrupting IT is a small price to pay if the business can make it happen. “It” doesn’t happen often, but when it does, new industries are created and new markets emerge to eclipse the old ones. Realistically, the value to the business shows up as more efficient IT ops, lower IT OPEX, and shorter app development cycles.

 

In part one of this series, I covered aspects of old & new IT management. Interestingly, new and old IT management are intertwined in their generational relationship. Old methods shape policies that construct the workflows that new methods use to manage the data center environments. But what really matters to the IT pro? At the end of the day, the only thing that matters is the application.


So understanding the application stack in its entirety, with context around all of its connections and a single point of truth, is paramount to successful IT management. Ideally, this single point of truth is where old and new IT management converge.


The right IT management mix is a function of organizations aligning business operations with IT management to maximize the efficiency and effectiveness of application delivery, while minimizing disruption to IT and, downstream, to the business. Unfortunately, politics and inertia have to be factored in, and they tend to have critical mass in organizational silos.


Is your organization converging to a single point of truth for your applications?

Unlike most application support professionals, or even system administrators, as database professionals you have the ability to look under the hood of nearly every application that you support. I know in my fifteen-plus years of being a DBA, I have seen it all. I've seen bad practices, best practices, and worked with vendors who didn't care that what they were doing was wrong, and others with whom I worked closely to improve the performance of their systems.

 

One of my favorite stories was an environmental monitoring application. I was working for a pharmaceutical company, and this was the first new system I helped implement there. The system was up for a week and performance had slowed to a crawl. After running some traces, I confirmed that there was a query without a WHERE clause that was scanning 50,000 rows several times a minute. Mind you, this was many years ago, when my server had 1 GB of RAM, so this was a very expensive operation. The vendor was open to working together, and I helped them design a WHERE clause, an indexing strategy, and a parameter change to better support the use of the index. We were able to quickly implement a patch and get the system moving again.

 

Microsoft takes a lot of grief from DBAs for products like SharePoint and Dynamics, and some of the interesting design decisions made within them. I don't disagree; there are some really bad database designs. However, I'd like to give credit to whoever designed System Center Configuration Manager (SCCM): this database has a very logical data model (it uses integer keys, what a concept!), and I was able to build a reporting system against it.

 

So what horror stories do you have about vendor databases? Or positives?

The terms EMS, NMS, and OSS are often misunderstood and used interchangeably. This can sometimes lead to confusion about which system performs which functions. Therefore, I am attempting to clarify these terms in a simple way, which may help you make informed decisions when procuring management systems.

 

But before understanding the terms for management systems, one should understand what FCAPS is in relation to management systems. In fact, every management system should perform FCAPS. The five letters in FCAPS stand for the following:

 

F – Fault Management: reading and reporting of faults in a network, for example link failure or node failure.

 

C – Configuration Management: relates to loading/changing configuration on network elements and configuring services in the network.

 

A – Accounting Management: relates to the collection of usage statistics for the purpose of billing.

 

P – Performance Management: relates to reading performance-related statistics, for example utilization, error rates, packet loss, and latency.

 

S – Security Management: relates to controlling access to the assets of the network. This includes authentication, encryption, and password management.

 

Ideally, any management system should perform all of the FCAPS functions described above. However, some commercial solutions provide only some of them; in that case, an additional management system will be needed to cover the rest. FCAPS applies to all types of management systems, including EMS, NMS, and OSS.

 

Now that we have covered the general functions of management systems, let's look at the terms EMS, NMS, and OSS.

 

EMS stands for "Element Management System"; it is also called an Element Manager. An EMS can manage (i.e., perform FCAPS on) a single node/element or a group of similar nodes. For example, it can configure and read alarms on a particular node or group of nodes.

 

An NMS (Network Management System), on the other hand, manages a complete network: it covers all the functions of an EMS and also performs FCAPS in relation to the communication between different devices.

 

So the difference between an EMS and an NMS is that an NMS can understand the inter-relationships between individual devices, which an EMS cannot. Although an EMS can manage a group of devices of the same type, it treats all the devices in the group as individual devices and does not recognize how they interact with one another.

 

So to sum up:

NMS = EMS + link/connectivity management of all devices + FCAPS on a network-wide basis.

 

An NMS can manage different types of network elements/technologies from the same vendor.

 

An example will clarify. An EMS would be able to give individual alarms on nodes, but an NMS can correlate the alarms on different nodes; it can thus find the root-cause alarm when a service is disrupted. It can do so because it has a network-wide view and intelligence.

 

An OSS (Operations Support System) takes this a step further. It can manage not only a single vendor's equipment but multiple vendors'. An OSS is needed in addition to the vendor-specific NMSs; it interacts with the individual NMSs and provides one dashboard for FCAPS management.

 

An OSS can thus give a single, end-to-end view of the network across all vendors. An example would be a service provisioning tool that can create an end-to-end service between Cisco and Juniper routers. This would need an OSS that can talk to the NMS of both vendors, or can even configure the network elements directly.

 

Having explained the terms EMS, NMS, and OSS, I will end my blog by asking:

 

  • Does your management system do all the FCAPS functions or some?

 

  • Do you prefer to have one network management system that does all FCAPS or different ones depending on different specialized functions?

 

Or maybe I should ask: are you using any management system at all?

 

Would love to hear your opinion!

Email has become the core of every organization. It is the primary source for communication that everybody turns to, regardless of the type of email server they are running. Who doesn’t have email??? When email is down, communication is down, resulting in lost productivity and potentially thousands to millions of lost dollars.

 

Microsoft Exchange Server is the most widely used on-premises enterprise email system today. When it works…it works great. But, when Exchange is not working correctly, it can be a nightmare to deal with.

 

For an Exchange administrator, dealing with Exchange issues can be challenging because there are so many variables when it comes to email. A user complaining about email being "slow" can be the result of many different factors: a network issue, a desktop client issue, or even a poorly performing Exchange Server. So many possibilities for what is wrong, and one unhappy user.

 

Not only do issues arise in everyday working situations, but if you are preparing for a migration or an Exchange upgrade, there are always "gotchas." These are things that get overlooked until something breaks, and then everybody is scrambling around to fix it and do damage control later. These are probably the most annoying to me because oftentimes somebody else has run into the problem first. So, if I had known about the gotchas, I could've been prepared.

 

I recently had the opportunity to present a webinar, "The Top Troubleshooting Issues Exchange Admins Face and How to Tackle Them." One of the great things about the IT community is that it's there for us to share our knowledge, so others can learn from our experiences. That's what I hoped to share in this webinar: some tips on solving annoying problems, as well as some tried-and-true lessons from managing Exchange myself. We discussed some of the challenges with Exchange migrations, mailbox management issues (client issues), and even discussed Office 365. You can view a recording of the webinar here.

 

Since our time was limited, I could not answer all the questions that were asked during the webinar, so I wanted to take an opportunity to answer some of them here:

 

1. Is there a size limit on Outlook 2010/Exchange 2010? We had a laptop user with a 20GB mailbox with cache mode enabled who had an issue with his offline address book dying within a day of his cache being resynced; we saw it as an issue with him trying to view other users' calendars.

 

This type of issue can be caused by many things and could be a whole blog in itself, so I will keep it short. Large mailboxes create large OST files locally on the machine being used, and those files can become corrupt. If that is the case, then creating a new OST file may resolve your issue. When trying to view others' calendars, you can try removing the calendar and re-adding the shared calendar. Also, double-check calendar permissions!
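
For the calendar-permissions part, a quick Exchange Management Shell sketch like the one below can confirm who has rights on the folder; the mailbox and user names are made up for illustration.

    # List current permissions on the user's Calendar folder, then grant
    # Reviewer access if it is missing ("jdoe" and "asmith" are hypothetical).
    Get-MailboxFolderPermission -Identity "jdoe:\Calendar"
    Add-MailboxFolderPermission -Identity "jdoe:\Calendar" -User "asmith" -AccessRights Reviewer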

 

2. Do you know the issue when installing EX2010 SP3 it fails at 'removing exchange files'? SP3 is needed for upgrading to EX2013.

 

There is a known issue with Exchange 2010 SP3 failing at 'removing exchange files' when the PowerShell script execution policy is defined in Group Policy. For more details on this issue, see Microsoft KB 2810617.
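
If you want to confirm whether that condition applies on the server in question, one quick check (on PowerShell 3.0 or later) is to list the execution policy by scope; a value set at the MachinePolicy or UserPolicy scope means it is coming from Group Policy, which is the situation KB 2810617 describes.

    # Show where the effective execution policy comes from; MachinePolicy or
    # UserPolicy entries indicate it is being defined via Group Policy.
    Get-ExecutionPolicy -List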


3. Any other resources we should review for Exchange Best Practices &/or Monitoring Tips (Other than Thwack?)

 

The Microsoft TechNet site has Exchange best practices as well as monitoring tips that can be helpful. There are also various Microsoft MVP sites worth following, such as:

 

http://exchangeserverpro.com/

http://www.stevieg.org/

http://www.expta.com/

 

 

4. Any advice for having Exchange 2003 and Exchange 2010 coexist until migration is completed?

 

Coexistence periods can be a challenge to manage and are best kept as short as possible. Microsoft provides some checklists and documentation that can help with coexistence, which can be found here on their TechNet site.

 

5. Is it possible to add more than 10 shared mailboxes to outlook 2010 client?

 

Yes, it is possible to have more than 10 shared mailboxes. Outlook 2010 and Outlook 2013 have a default limit of 10 mailboxes, with a maximum supported limit of 9999 accounts. To configure Outlook for more than the default limit, you will need to edit registry settings or apply a Group Policy.

 

6. Is there a way can we enable "On behalf of" for the shared mailbox, so the end user who receives the email knows who sends the email?

 

To enable send on behalf for a shared mailbox, you can configure delegates for the shared mailbox. You can also apply the send on behalf of setting under the mail flow settings in the mailbox account properties.
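
As a rough Exchange Management Shell sketch of that second option, something like the following should work; the mailbox and user names are hypothetical.

    # Grant send-on-behalf rights on a shared mailbox, then verify the setting
    # ("SalesShared" and "jsmith" are made-up names).
    Set-Mailbox -Identity "SalesShared" -GrantSendOnBehalfTo @{Add = "jsmith"}
    Get-Mailbox -Identity "SalesShared" | Select-Object Name, GrantSendOnBehalfTo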

 

 

I had a great time participating in the webinar with Leon and hope our viewers were able to take some tips back with them to help within their Exchange environments. Exchange is at the core of most businesses, and managing an Exchange environment can be challenging; I know that from personal experience. However, with the right tools by your side, you can tame the beast and keep the nightmare events to a minimum.

 

Watching David Letterman sign off was a reminder that old shows are still great, but they are often replaced by new shows that resonate better with current times. It’s a vicious cycle driven by the nature of business. The same can be said of IT management with its constant struggle of old vs new management methods.

 

The old IT management principles rely on tried-and-true and trust-but-verify mantras. If it isn't broke, don't go breaking it. These processes are built on experience and born of IT feats of strength. Old IT management collects, alerts, and visualizes the data streams. The decision-making and actions taken rest in the hands of IT pros, and trust is earned over a long period of time.


Outdated is how new IT management characterizes the old management ways. Too slow, too restrictive, and too dumb for the onset of new technologies and processes. New IT management is all about the policies and analytics engine that remove the middle layer—IT pros. Decisions are automatically made with the analytics engine while remediation actions leverage automation and orchestration workflows. Ideally, these engines learn over time as well so they become self-sustaining and self-optimizing.


The driver for the new management techniques lies in business needs. The business needs agility, availability, and scalability for its applications. Whether they are developing the application or consuming it, the business wants these qualities to drive and deliver differentiated value. So applications are fundamental to the business initiatives and the bottom line.


Where does your organization sit on the IT management curve – more old, less new or less old, more new or balanced? Stay tuned for Part 2 of 2015 IT Management Realities.
