
Hello Geek Speakers,


As 2015 comes to an end and 2016 begins, SolarWinds once again tapped its band of experts - the Head Geeks - to take a look inside their crystal balls and provide a glimpse into the IT trends to watch for in the coming year. Will their predictions come to fruition in 2016? Let us know what you think, and don't forget to revisit last year's predictions to see where the Head Geeks were right or wrong.


Be sure to @mention each Geek to continue the conversation with them about their predictions. Here are their thwack handles:


Kong Yang - kong.yang


Patrick Hubbard - patrick.hubbard


Leon Adato - adatole


Thomas LaRock - sqlrockstar


As we did last year, we will continue to revisit these predictions throughout 2016 to see whether they are becoming a reality or whether they were entirely off the mark. Although the latter will likely NOT happen!






We hope you all have a Happy New Year - Enjoy!


 

After taking a look at what it means to monitor the stability of Microsoft Exchange, and at choosing a product option that won't keep your staff busy for months configuring it, we will now look at what it means to monitor Exchange Online in the Office 365 platform. Yes, you did read that correctly: Exchange Online. Isn't Microsoft monitoring Exchange Online for me? Well, yes, there is some level of monitoring, but we as customers typically do not get frontline insight into the aspects of the product that are not working until something breaks. So, let's dive into this a little further.


Exchange Online Monitoring


If your organization has chosen Exchange Online, your product SLAs will generally be around 99.9x%. The uptime varies from month to month, but historically Microsoft is right on track with its marketed SLA or slightly exceeds it. As a customer of this product, your organization still uses the core Exchange features such as a Database Availability Group (DAG) for your databases, Outlook Web App, Azure Active Directory, Hub Transport servers, CAS servers, and so on; the only difference is that Microsoft maintains all of this for your company. Assuming that Office 365/Exchange Online meets the needs of your organization, this is great, but what happens when something is down? 99.9x% is good, but it's not 99.999%, so there are guaranteed to be some occurrences of downtime.
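To put those SLA figures in perspective, here is a quick back-of-the-envelope calculation. It is only a rough sketch (it assumes a flat 30-day month and ignores any scheduled-maintenance carve-outs), but it shows how much downtime each availability level actually permits:

// Rough downtime budget per 30-day month for a given SLA percentage.
const minutesPerMonth = 30 * 24 * 60; // 43,200 minutes

function downtimeBudget(slaPercent) {
  return (minutesPerMonth * (1 - slaPercent / 100)).toFixed(1) + " minutes/month";
}

console.log("99.9%   ->", downtimeBudget(99.9));   // ~43.2 minutes/month
console.log("99.99%  ->", downtimeBudget(99.99));  // ~4.3 minutes/month
console.log("99.999% ->", downtimeBudget(99.999)); // ~0.4 minutes/month

Even at the marketed 99.9% level, that is potentially three quarters of an hour of email downtime in a month that someone in your organization will notice.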


Do I really need monitoring?


Not convinced monitoring is required? If your organization has chosen to move to the Exchange Online platform, being able to understand what is and isn't working in the environment can be very valuable. As an Exchange administrator on the Exchange Online platform, if something isn't working, I can guarantee that leadership is still going to look to me to understand why, even though the product is not maintained on site. Having a quick and simple way to see that everything is functioning properly (or not) through a monitoring tool allows you to quickly give your leaders the feedback they need to properly communicate to the business what is happening, even if the only thing I can do next is contact Microsoft to report the issue.


Corrective Measures


Choose a monitoring tool for Exchange Online that will provide valuable insight into your email functionality. My guidance here is relatively similar to the suggestions I would make for Exchange on-premises.


  • Evaluate several tools that offer Exchange Online monitoring, and then decide which one will best suit your organization's requirements.
  • Implementation of monitoring should be a project with a dedicated resource.
  • The tool should be simple and not time consuming to configure.
  • Choose a tool that monitors Azure connectivity too. Exchange Online depends heavily on Azure Active Directory and DNS, so being aware of the health of your organization's connectivity to the cloud is important.
  • Make sure you can easily monitor your primary email functionality. This can include email flow testing, Outlook Web App, directory synchronization, ActiveSync, and more (see the simple probe sketch after this list).
  • Ensure that the selected tool has robust reporting. This saves you from scripting your own reports and allows for better historical trending of email information. These reports should include things such as mail flow SLAs, large mailboxes, abandoned mailboxes, top mail senders, public folder data, distribution lists, and more.
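To illustrate the most basic layer of what such a tool automates for you, here is a minimal Node.js sketch that simply confirms the public Outlook Web App endpoint responds over HTTPS and how long it takes. The endpoint choice and the threshold are assumptions for the example, and a real monitoring product goes much further, testing mail flow, directory synchronization, and ActiveSync end to end.

// Minimal reachability probe for the Office 365 OWA endpoint (Node.js sketch).
// This only checks that an HTTPS response comes back promptly; it does not
// prove that mailboxes, mail flow, or directory sync are actually working.
const https = require("https");

const endpoint = "https://outlook.office365.com/owa/"; // public OWA sign-in URL
const warnAfterMs = 3000; // example threshold; tune for your environment

const started = Date.now();
https.get(endpoint, (res) => {
  const elapsed = Date.now() - started;
  const note = elapsed > warnAfterMs ? " (slow!)" : "";
  console.log("OWA responded HTTP " + res.statusCode + " in " + elapsed + " ms" + note);
  res.resume(); // discard the body; we only care about reachability
}).on("error", (err) => {
  console.error("OWA endpoint unreachable: " + err.message);
});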


These considerations will help you determine which product solution is best for your organizational needs.


Concluding Thoughts


Monitoring the health of your organization's connectivity to the cloud provides valuable insight into your email system. There are options that will give you and your organizational leadership instant insight into the environment, ensuring that there is a shared understanding of overall system health, performance, and uptime.

Now that I'm finally recovered from Microsoft Convergence in Barcelona, I've had a chance to compare my expectations going into the event with the actual experience of attending. And as always for SolarWinds staff, especially Head Geeks, that experience is all about speaking with customers. What was different about Convergence, aside from Barcelona always being its wonderful self, is that the mix of conversations tended more toward IT managers and less toward hands-on admins. And lately I'm finding more and more managers who actually understand the importance of taking a disciplined approach to monitoring.

 


MS Convergence is focused on MS Dynamics, specifically Dynamics AX and CRM. Generally, when you look at the front page for a software product and the first button is "Find a partner" rather than "Live Demo" or "Download", it lets you know it's a complex platform. But in the case of Dynamics AX, there's a reason: a surprising number of integrations exist for the platform. There's no way Microsoft can be expert in SAP, TPF, Informix, Salesforce, and 496 other large platforms, so they rely on partners. For us in the SolarWinds booth, of course, it meant we had lots of familiar conversations about complex infrastructure and the challenges of managing everything from networking to apps to storage and more.

 

One thing was clear, at least with the European audience: cloud for Dynamics customers seems to be driving more discipline, not less. To admins, cloud too often means shadow IT: even more junk that we need to monitor, but with less control. With Dynamics, the challenge isn't remaining calm when your NetFlow reports show dozens of standalone Azure or AWS instances; it's the reverse.

 

With a single Azure service endpoint for the platform, firewall and router traffic analysis uncovers dozens of niche domain publishers across the organization, each pushing critical business data to Dynamics in the cloud. Some integrations are well behaved, run by ops and monitored, but others are hiding under the desk of a particularly clever analyst. These developers have credentials to extend the CRM data picture, but no budget to assure operations. Managers I spoke with were of course nervous about that, plus regulatory compliance and, of course, evolving EU data privacy laws: not a trivial CRM endeavor. (It makes me actually grateful for U.S. PCI, HIPAA, SOX, and GLBA, which, though headaches, are reasonably stable.)

 

It was interesting to look at the expo hall, filled with dozens of booths of boutique partners, each specializing in a particular CRM integration nuance, and to remember the fundamental concerns of everyone in IT. We're all on the hook for availability, security, cost management, and end-user quality of experience, and we share the same challenges. Establishing and maintaining broad insight into all elements of production infrastructure is not just a first step, and it certainly mustn't be the last step. Monitoring is a critical service admins provide to management and teammates that makes everything else work. In hybrid IT, with on-premises, cloud, SaaS, and everything else in between, there are more dependencies, not fewer, more to configure, and more to break. It always feels good for monitoring to be needed just as much by the Big Systems owners and IT managers as it is to be the favorite tool of admins working helpdesk tickets.

 

Of course it was also nice to see that Microsoft finally streamlined the doc for Surface Pro 4.

I've had the opportunity over the past couple of years to work with a large customer of mine on a refresh of their entire infrastructure. Network management tools were one of the last pieces to be addressed, as the emphasis had been on legacy hardware first and the direction for management tools had not been established. This mini-series highlights this company's journey: the problems solved, the insights gained, and the unresolved issues that still need addressing in the future. Hopefully this helps other companies or individuals going through the process. Topics will include discovery around the types of tools, how they are being used, who uses them and for what purpose, their fit within the organization, and lastly what more they leave to be desired.


Blog Series

One Company's Journey Out of Darkness, Part I: What Tools Do We Have?

One Company's Journey Out of Darkness, Part II: What Tools Should We Have?

One Company's Journey Out of Darkness, Part III: Justification of the Tools

One Company's Journey Out of Darkness, Part IV: Who Should Use the Tools?

One Company's Journey Out of Darkness, Part V: Seeing the Light

One Company's Journey Out of Darkness, Part VI: Looking Forward



If you've followed the series this far, you've seen a progression through a series of tools being rolled out. My hope is that this last post in the series spawns some discussion around the tools the market still needs and the features or functionality they should have. These are the top three things that we are looking at next.

 

Event Correlation

The organization acquired Splunk to correlate events happening at the machine level throughout the organization, but this is far from fully implemented and will likely be the next big focus. The goal is to integrate everything from clients to manufacturing equipment to networking to find information that will help the business run better, experience fewer outages and issues, and improve security. Machine data is being collected to learn about errors in the manufacturing process as early as possible. This error detection allows for on-the-fly identification of faulty machinery and enables quicker response times, which decreases the amount of bad product and waste and improves overall profitability. I still believe there is much more to be gained here in terms of user experience, proactive notifications, and so on.


Software Defined X

The organization is looking to continue its move into the software-defined world for networking, compute, storage, and more. These offerings vary greatly, and the decision to go down a specific path shouldn't be taken lightly. In our case we are looking to simplify network management across a very large organization, and to do so in a way that enables not only IT workflows but those of other business units as well. This will likely be OpenFlow based and start with the R&D use cases. Organizationally, IT has now put standards in place requiring that all future equipment support OpenFlow as part of the SDN readiness initiative.

Software-defined storage is another area of interest, as it reduces the dependency on any one particular hardware type and allows for ease of provisioning anywhere. The ideal use case, again, is for R&D teams as they develop new products. The products likely to lead here are those that are pure software and open, though evaluation has not really begun in this area yet.


DevOps on Demand

IT getting a handle on the infrastructure needed to support R&D teams was only the beginning of the desired end state. One of the loftiest goals is to create an on-demand lab environment that provides compute, storage, and network on demand in a secure fashion, along with intelligent request monitoring and departmental bill-back. We've been looking into Puppet Labs, Chef, and others, but do not have a firm answer here yet. This is a relatively new space for me personally, and I would be very interested in further discussion around how people have been successful in it.

 

Thank you all for your participation throughout this blog series.  Your input is what makes this valuable to me and increases learning opportunities for anyone reading.


Recently we covered what it means to configure server monitoring correctly, and the steps we can take to ensure that the information we are alerted on is useful and meaningful. We learned that improper configuration leads to support teams that ignore their alerts, and system monitoring becomes noise. Application monitoring isn't any different, and what your organization sets up for these needs is likely to be completely different from what was done for your server monitoring. In this article we will focus on monitoring Microsoft Exchange on-premises, and on what should be considered when choosing and configuring a monitoring tool to ensure that your organizational email is functioning smoothly to support your business.


Email Monitoring Gone Wrong


In the early days of server monitoring, it wasn't unusual for system administrators to spend months configuring their server monitoring tool for their applications. For some applications that investment may be completely appropriate, but with Microsoft Exchange I have found that server monitoring tools typically are not enough to successfully monitor your organizational email system, even if the server monitoring tool comes with a "package" specifically for monitoring email. The issue is that, by default, these tools either alert on too much or too little, never giving your application owner exactly what they need.


Corrective Measures


So how can your business ensure that email monitoring is set up correctly, and that the information received from the tool is useful? It really comes down to several simple things.

  • Evaluate several Exchange monitoring tools, and then choose the one that best suits your Exchange needs. In most cases this tool is not the same as your server monitoring tool.
  • Implementation of Exchange monitoring should be a project with a dedicated resource.
  • The tool should be simple and not time consuming to configure. It should NOT take 6 months to be able to properly monitor your email system.
  • Choose a tool that monitors Active Directory too. Exchange depends heavily on Active Directory and DNS, so Active Directory health is also vital.
  • Make sure you can easily monitor your primary email functionality. This includes email flow testing, your Exchange DAG, the DAG witness, Exchange databases, ActiveSync, Exchange Web Services, and any additional email functionality that is important to your organization (see the simple probe sketch after this list).
  • Ensure that the selected tool has robust reporting. This saves you from scripting your own reports and allows for better historical trending of email information. These reports should include things such as mail flow SLAs, large mailboxes, abandoned mailboxes, top mail senders, public folder data, distribution lists, and more.
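As a taste of the simplest layer of this, here is a minimal Node.js sketch that polls the per-protocol health-check pages published by Exchange 2013 and later (the same pages load balancers commonly use). The server name is a placeholder, and this only scratches the surface; a real tool adds mail flow tests, DAG and database copy status, and alerting on top.

// Poll the per-protocol health-check pages published by Exchange 2013+.
// "mail.example.com" is a hypothetical namespace; replace it with your own.
const https = require("https");

const server = "mail.example.com";
const probes = [
  "/owa/healthcheck.htm",
  "/ews/healthcheck.htm",
  "/Microsoft-Server-ActiveSync/healthcheck.htm"
];

probes.forEach((path) => {
  https.get("https://" + server + path, (res) => {
    console.log(path + ": HTTP " + res.statusCode); // 200 means the protocol is up
    res.resume();
  }).on("error", (err) => {
    console.error(path + ": unreachable (" + err.message + ")");
  });
});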

This approach will help ensure that your email system remains functional and that you are alerted before a real issue occurs, not after the system has gone down.


Concluding Thoughts


Implementing the correct tool set for Microsoft Exchange monitoring is vital to ensuring the functionality and stability of email for your business. This is often not the same tool used for server monitoring, and it should include robust reporting options to confirm your SLAs are being met and that email remains functional for your business purposes.

Helllllllo Geek Speak Friends,

 

It is my pleasure to inform our awesome community that Geek Speak has been nominated for the 2015 Bytes that Rock Awards in the Best Software Blog category. With that, I want to thank all of our amazing contributors and the thwack community. All of you provide amazing content, from blogs to in-depth discussions. The Head Geeks and Ambassadors work very hard to crank out the content you enjoy so much.


Kong Yang - @kong.yang

Patrick Hubbard - @patrick.hubbard

Leon Adato - @adatole

Thomas LaRock - @sqlrockstar


Show them some love and Vote today!


The winner will be announced December 10th...so stay tuned.

 

 


Yet another tip from your friendly neighborhood dev.

By Corey Adler (ironman84), professional software developer

(With many thanks to my boss, John Ours.)

 

I knew it would eventually come to this. I knew I would inevitably have to deal with this issue, but I was hoping to put it off a bit longer. The thing is, I painted myself into a corner with my last post.

 

Well, here goes:

 

Always, always, FRAKKING ALWAYS declare your variables.

 

I don’t care if you’re using a language that allows for dynamic typing.

 

DECLARE. YOUR. VARIABLES.

 

Why am I so passionate about this? It probably has to do with the fact that I've had to look at and fix code in script files where variables were not declared. Those were some of the most difficult tasks of my career, simply because the missing declarations made it harder for me to keep track of everything going on. It might be easy and simple for you to describe what each of your variables is doing, but what about the next person who takes a look at it? As a programmer you always have to think not just about what you're doing right now, but about what might happen days, months, or even years from now.

 

So what about someone like yourself, who is not a professional programmer and is just playing around with the code? Wouldn't that argument work in your favor (since no one else will be working on it)? NOPE! Because that next person could easily be you, years later, looking back over some piece of code. Sadly, you can't even assume that you, who wrote the code, will still have some idea what that code does when you go back to look at it.

xkcd_comments.png

 

The best thing for you to do is to assume that the next person to look at it won’t have any idea about what’s going on and needs you to help explain things. The reason you should declare the variables is to help with the self-documentation of your code for future use. It’s a great practice to get into, and no professional experience is needed.

 

That's as far as styling goes, though. The primary reason to always declare your variables (as well as why you shouldn't just declare them all at the top of your file) is a concept called implicit scope. Implicit scope is the idea that you should only declare your variables as you happen to need them. The benefits of this approach are twofold. First, it keeps variables confined to the contained blocks of code where they are actually used. For example, let's say that you have a variable that you only use inside of a for-loop. Instead of having that variable take up space (both in your file and in memory) for the whole length of the program, you declare and use that variable only in that for-loop (see the short sketch below). That way, no one looking at the code needs to worry about other locations that use that variable, since it's clearly contained within a specific block of code. Second, it makes it easier to debug your code should the need arise. When you see a variable declared right before its first use in your program, you know that it hasn't been used previously. If you haven't declared your variables, though, or you've declared them all at the top of the file, someone (including you) who looks at the code later will need to check whether that variable is being used anywhere else prior to that location, which can waste a lot of time and be a drain on one's memory.
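To make the for-loop case concrete, here is a small JavaScript sketch; the function and variable names are invented for the example:

// Declaring variables right where they are needed keeps their scope obvious.
function sumOrderValues(orders) {
  let total = 0;                        // declared immediately before its first use
  for (let i = 0; i < orders.length; i++) {
    const value = orders[i];            // exists only inside this loop block
    total += value;
  }
  return total;                         // i and value are already out of scope here
}

console.log(sumOrderValues([10, 20, 12])); // 42

Had total or value simply been assigned without a declaration, they would silently become globals in non-strict mode, visible to and modifiable by every other part of the script.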

 

A third reason to declare your variables involves something in programming called coercion. Coercion is the automatic conversion of one of the operands in an expression to another type at runtime. Here's an example of implicit type coercion, using JavaScript:

[ ] == 0; // result is true
0 == "0"; // result is true

So why, exactly, is coercion a bad thing? First of all, it requires each variable that you create to take a larger chunk out of memory than would normally be required. In statically typed languages, like C#, each different variable type is allocated a certain, fixed amount of memory, which would correspond to the size of the maximum values for those types. With coercion, all variables are treated the same, and are allocated the same amount of memory, sometimes way more than is necessary. All of this can add up in your application, potentially causing it to run slower.

 

A fourth reason, and probably a more understandable one for people not as well versed in coding as you, is the insanely weird behavior that can occur when coercion is allowed. Take this extreme example that I found at http://g00glen00b.be/javascript-coercion/: insert the following into the address bar of your Web browser (and re-type the "javascript:" at the front if your browser automatically removes it when copy-pasted):

javascript:alert(([]+[][[]])[!![]+!![]]+([]+[][[]])[!![]+!![]+!![]+!![]+!![]]+(typeof (![]+![]))[!![]+!![]]+([]+[][[]])[!![]+!![]+!![]+!![]+!![]]+([]+!![])[![]+![]]+([]+!![])[![]+!![]]+([]+[][[]])[!![]+!![]+!![]+!![]+!![]])

 

The result of this is an alert box with the name of the person who wrote the post at the aforementioned link. All of this is only possible through coercion. Let's take another, simpler example of the weird behavior that can occur when you allow coercion to happen, found at http://whydoesitsuck.com/why-does-javascript-suck/:

var a = "1"

var b = 2

var c = a + b

 

In a case like this, what would you expect the result of c to be? If you think it should be 3, then you're a victim of coercion. The same goes if you think it will be "3" (as in the string value). In actuality, the value populated in c is the string "12". Adding a + sign in front of a (as in "+a + b") will give you back 3 (as a number, not a string). Or how about the following example:

16 == [16]       

16 == [1,6]      

"1,6" == [1,6]  

 

So what do you think? The first one could be true, because they both contain 16, but it could be false since one is an integer and the other is an array. The second one doesn't look right, but maybe that's how an array of integers is expressed when printed on the screen. Or the third one, which also doesn't look right, but could be right for the same reason the second could be. The truth, in order, is true, false, and... true. WHAT THE HECK IS UP WITH THAT? It turns out that JavaScript translates the array into the string "1,6", and the comparison then comes back as true, since the two strings now match.

 

One final reason for avoiding both coercion and the itch to not declare your variables involves programming architecture. Programming languages typically rely either on interpreters to translate the code to be run (as in Java™) or on compilers that do the same. What happens when your code runs is entirely dependent on those interpreters or compilers. Those, in turn, can be highly dependent on the architecture of the computer the code runs on. Slight changes in either the compiler or the architecture could drastically change the result of the program, even though it's still the same code being run. What happens if, at some point down the line, someone makes an adjustment to the coercion system? How much code would that affect? Considering the ever-changing nature of the IT world we live in, that's not a trivial concern. It's not a remote possibility, either, with JavaScript due to receive the ES2015 upgrade soon. What will your code do when it gets upgraded?

 

So there you have it: ALWAYS DECLARE YOUR VARIABLES, EVEN IN DYNAMICALLY TYPED LANGUAGES. In the words of the Governator:

ahnold1.jpg

 

No, wait, that’s not right. Ah, here it is:

ahnold2.jpg

 

I’m finished ranting. Let me know what other topics you think I should touch on in the comments below, or feel free to message me on thwack®. Until next time, I wish you good coding.
