Hello Geek Speakers,
As 2015 comes to an end and 2016 begins, SolarWinds once again tapped its band of experts - the Head Geeks - to take a look inside their crystal balls and provide a glimpse into IT trends to watch for in the coming year. Will their predictions come to fruition in 2016? Let us know what you think, and don't forget to revisit last year's predictions to see where the Head Geeks were right or wrong.
Be sure to @mention each Geek to continue the conversation with them about their predictions. Here are each of their thwack handles:
Kong Yang - kong.yang
Patrick Hubbard - patrick.hubbard
Leon Adato - adatole
Thomas LaRock - sqlrockstar
As we did last year, we will continue to revisit these predictions and see if they are becoming a reality, or if they were entirely incorrect. Though that last outcome will likely NOT happen!
We hope you all have a Happy New Year - Enjoy!
After taking a look at what it means to monitor the stability of Microsoft Exchange, and at choosing a product option that won’t keep your organizational staff busy for months configuring it, we will now look at what it means to monitor Exchange Online in the Office 365 product platform. Yes, you did read that correctly: Exchange Online. Isn’t Microsoft monitoring Exchange Online for me? Well, yes, there is some level of monitoring, but we as customers typically do not get frontline insight into the aspects of the product that are not working until something breaks. So, let’s dive into this a little bit further.
Exchange Online Monitoring
If your organization has chosen Exchange Online, your product SLAs will generally be around 99.9x%. The uptime varies from month to month, but historically they are right on track with their marketed SLA, or they slightly exceed it. As a customer of this product, your organization still utilizes the core Exchange features such as a Database Availability Group (DAG) for your databases, Outlook Web App, Azure Active Directory, Hub Transport servers, CAS servers, etc.; the only difference is that Microsoft maintains all of this for your company. So, assuming that Office 365/Exchange Online meets the needs of your organization, this is great, but what happens when something is down? 99.9x% is good, but it’s not 99.999%, so there are guaranteed to be some occurrences of downtime.
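To put those availability numbers in perspective, here is a quick back-of-the-envelope sketch of the downtime budget each level implies. This is illustrative only; actual SLA terms, measurement windows, and service credits vary by contract.

```javascript
// Rough downtime budget implied by an availability percentage.
// Illustrative arithmetic only; real SLA terms vary by contract.
function downtimeMinutesPerYear(availabilityPercent) {
  const minutesPerYear = 365 * 24 * 60; // 525,600 minutes
  return minutesPerYear * (1 - availabilityPercent / 100);
}

console.log(downtimeMinutesPerYear(99.9).toFixed(1));   // ~525.6 minutes (about 8.8 hours) per year
console.log(downtimeMinutesPerYear(99.999).toFixed(1)); // ~5.3 minutes per year
```

The difference between "three nines" and "five nines" is roughly a full workday of potential outage per year, which is exactly the window where your own monitoring earns its keep.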
Do I really need monitoring?
Not convinced monitoring is required? If your organization has chosen to move to the Exchange Online platform, being able to understand what is and isn’t working in the environment can be very valuable. As an Exchange Administrator within the Exchange Online platform, if something isn’t working, I can guarantee that leadership is still going to look to me to understand why, even if the product is not maintained onsite. Having a quick and simple way to see that everything is functioning properly (or not) through a monitoring tool allows you to quickly give your leaders the feedback they need to properly communicate to the business what is happening, even if the only thing I can do next is contact Microsoft to report the issue.
Choose a monitoring tool for Exchange Online that will provide valuable insights into your email functionality. My guidance here would be relatively similar to the suggestions I would make for Exchange On-Premises.
These considerations will help you determine which product solution is best for your organizational needs.
Monitoring the health of your organization's connectivity to the cloud is valuable for gaining insight into your email system. There are options that will provide you and your organizational leadership instant insight into overall system health, performance, and uptime.
Now that I’m finally recovered from Microsoft Convergence in Barcelona, I’ve had a chance to compare my expectations going into the event with the actual experience of attending. And as always for SolarWinds staff, especially as Head Geek, that experience is all about speaking with customers. What was different about Convergence, aside from Barcelona always being its wonderful self, is that the mix of conversations tended more toward IT managers and less toward hands-on admins. And lately I’m finding more and more managers who actually understand the importance of taking a disciplined approach to monitoring.
MS Convergence is focused on MS Dynamics, specifically Dynamics AX CRM. Generally, when you look at the front page for a software product and the first button is “Find a partner” rather than “Live Demo” or “Download,” it lets you know it’s a complex platform. But in the case of Dynamics AX, there’s a reason: a surprising number of integrations for the platform. There’s no way Microsoft can be an expert in SAP, TPF, Informix, Salesforce, and 496 other large platforms, so they rely on partners. For us in the SolarWinds booth, of course, it meant we had lots of familiar conversations about complex infrastructure and the challenges of managing everything from networking to apps to storage and more.
One thing was clear, at least with the European audience: cloud for Dynamics customers seems to be driving more discipline, not less. For admins, cloud too often means shadow IT: even more junk that we need to monitor, but with less control. With Dynamics, the challenge isn’t remaining calm when your NetFlow reports show dozens of standalone Azure or AWS instances; it’s the reverse.
With a single Azure service endpoint for the platform, firewall and router traffic analysis uncovers dozens of niche domain publishers across the organization, each pushing critical business data to Dynamics in the cloud. Some integrations are well behaved, run by ops and monitored, but others are hiding under the desk of a particularly clever analyst. These developers have credentials to extend the CRM data picture, but no budget to assure operations. Managers I spoke with were of course nervous about that, plus regulatory compliance and evolving EU data privacy laws; not a trivial CRM endeavor. (Makes me actually grateful for U.S. PCI, HIPAA, SOX, and GLBA, which, though headaches, are reasonably stable.)
It was interesting to look at the expo hall, filled with dozens of booths of boutique partners, each specializing in a particular CRM integration nuance, and to remember the fundamental concerns of everyone in IT. We’re on the hook for availability, security, cost management, and end-user quality of experience, and we share the same challenges. Establishing and maintaining broad insight into all elements of production infrastructure is not only a first step, and it certainly mustn’t be the last step. Monitoring is a critical service admins provide to management and teammates that makes everything else work. In hybrid IT, with on-premises, cloud, SaaS, and everything in between, there are more, not fewer, dependencies: more to configure and more to break. It always feels good for monitoring to be needed just as much by the Big Systems owners and IT managers as by admins working helpdesk tickets.
Of course it was also nice to see that Microsoft finally streamlined the doc for Surface Pro 4.
I've had the opportunity over the past couple of years to work with a large customer of mine on a refresh of their entire infrastructure. Network management tools were one of the last pieces to be addressed, as the emphasis had been on legacy hardware first and the direction for management tools had not been established. This mini-series will highlight this company's journey: the problems solved, insights gained, as well as unresolved issues that still need addressing in the future. Hopefully this helps other companies or individuals going through the process. Topics will include discovery around types of tools, how they are being used, who uses them and for what purpose, their fit within the organization, and lastly what more they leave to be desired.
If you've followed the series this far, you've seen a progression through a series of tools being rolled out. My hope is that this last post in the series spawns some discussion around tools the market still needs and the features or functionality they should have. These are the top three things that we are looking at next.
Software Defined X
We are looking to continue our move into the software defined world for networking, compute, storage, etc. These offerings vary greatly, and the decision to go down a specific path shouldn't be taken lightly by an organization. In our case, we are looking to simplify network management across a very large organization, and to do so in such a way that we enable not only IT workflows but those of other business units as well. This will likely be OpenFlow based and start with the R&D use cases. Organizationally, IT has now set standards in place that all future equipment must support OpenFlow as part of the SDN readiness initiative.
Software defined storage is another area of interest, as it reduces the dependency on any one particular hardware type and allows for ease of provisioning anywhere. The ideal use case again is for R&D teams as they develop new products. Products that will likely lead here are those that are pure software and open, though evaluation has not really begun in this area yet.
DevOps on Demand
IT getting a handle on the infrastructure needed to support R&D teams was only the beginning of the desired end state. One of the loftiest goals is to create an on-demand lab environment that provides compute, storage, and network on demand in a secure fashion, as well as intelligent request monitoring and departmental bill-back. We've been looking into Puppet Labs, Chef, and others, but do not have a firm answer here yet. This is a relatively new space for me personally, and I would be very interested in further discussion around how people have been successful in this space.
Thank you all for your participation throughout this blog series. Your input is what makes this valuable to me and increases learning opportunities for anyone reading.
Recently we covered what it means to configure server monitoring correctly, and the steps we can take to ensure that the information we get alerted on is useful and meaningful. We learned that improper configuration leads to support teams that ignore their alerts, and system monitoring becomes noise. Application monitoring isn’t any different, and what your organization sets up for these needs is likely to be completely different from what was done for your server monitoring. In this article we will focus on monitoring Microsoft Exchange on-premises, and what should be considered when choosing and configuring a monitoring tool to ensure that your organizational email is functioning smoothly to support your business.
Email Monitoring Gone Wrong
In the early days of server monitoring, it wasn’t unusual for system administrators to spend months configuring their server monitoring tool for their applications. With some applications, this may be completely appropriate, but with Microsoft Exchange I have found that server monitoring tools typically are not enough to successfully monitor your organizational email system, even if the tool comes with a “package” specifically for monitoring email. The issue is that, by default, these tools will either alert on too much or too little, never giving your application owner exactly what they need.
So how can your business ensure that email monitoring is set up correctly, and that the information received from that tool is useful? Well, it really comes down to several simple things.
This approach will ensure that your email system remains functional, and that you are alerted before a real issue occurs, not after the system has gone down.
Implementing the correct tool set for Microsoft Exchange monitoring is vital to ensuring the functionality and stability of email for your business. This is often not the same tool used for server monitoring, and it should include robust reporting options to ensure your SLAs are being met and that email remains functional for your business purposes.
Helllllllo Geek Speak Friends,
It is my pleasure to inform our awesome community that Geek Speak has been nominated for the 2015 Bytes that Rock Awards in the Best Software Blog category. With that said, I want to thank all of our amazing contributors and the thwack community. All of you provide amazing content, from blogs to in-depth discussions. The Head Geeks and Ambassadors work very hard to crank out the content you enjoy so much.
Kong Yang - @kong.yang
Patrick Hubbard - @patrick.hubbard
Leon Adato - @adatole
Thomas LaRock - @sqlrockstar
Show them some love and Vote today!
The winner will be announced December 10th...so stay tuned.
Yet another tip from your friendly neighborhood dev.
By Corey Adler (ironman84), professional software developer
(With many thanks to my boss, John Ours.)
I knew it would eventually come to this. I knew I would inevitably have to deal with this issue, but I was hoping to put it off a bit longer. The thing is, I painted myself into a corner with my last post.
Well, here goes:
Always, always, FRAKKING ALWAYS declare your variables.
I don’t care if you’re using a language that allows for dynamic typing.
DECLARE. YOUR. VARIABLES.
Why am I so passionate about this? It probably has to do with the fact that I’ve had to look at and fix code in script files where variables were not declared. Those were some of the most difficult tasks in my career, simply because the lack of declarations made it harder for me to keep track of everything going on. It might be easy and simple for you to describe what each of your variables is doing, but what about the next person that’s going to take a look at it? As a programmer, you always have to think not just about what you’re doing right now, but about what might happen days, months, or even years from now.
So what about someone, like yourself, who is not a professional programmer, who is just playing around with the code? Wouldn’t that argument work in your favor (since no one else will be working on it)? NOPE! Because that next person could easily be you, years later, looking back over at some piece of code. Sadly, you can’t even make the assumption that you, who wrote the code, will still have some idea what that code does when you go back to look at it.
The best thing for you to do is to assume that the next person to look at it won’t have any idea about what’s going on and needs you to help explain things. The reason you should declare the variables is to help with the self-documentation of your code for future use. It’s a great practice to get into, and no professional experience is needed.
That’s as far as styling goes, though. The primary reason to always declare your variables (as well as why you shouldn’t just declare them all at the top of your file) is a concept called implicit scope. Implicit scope is the idea that one should only declare variables as they happen to be needed. The benefits of this approach are twofold. First, it confines variables that are only used in certain contained blocks of code to those blocks. For example, let’s say that you have a variable that you only use inside of a for-loop. Instead of having that variable taking up space (both in your file and in memory) for the whole length of the program, you have that variable declared and used only in that for-loop. That way no one looking at the code needs to worry about other locations that use that variable, since it’s clearly contained within a specific block of code. Second, it makes it easier to debug your code should the need arise. When you see a variable declared right before its first use in your program, you know that it hasn’t been used previously. If you haven’t declared your variables, though, or you’ve declared them all at the top of the file, someone (including you) who goes to look at the code later will need to check whether that variable is being used anywhere else prior to that location, which can waste a lot of time and be a drain on one’s memory.
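To make the for-loop example concrete, here is a minimal sketch in JavaScript using the block-scoped `let` and `const` introduced in ES6 (the function name and values are invented for illustration):

```javascript
// Declaring variables where they are used (implicit scope).
// `let` and `const` are scoped to the enclosing block, unlike `var`.
function sumOfSquares(values) {
  let total = 0; // needed for the whole function, so declared here
  for (let i = 0; i < values.length; i++) {
    const square = values[i] * values[i]; // only needed inside the loop
    total += square;
  }
  // `i` and `square` are out of scope here, so a reader immediately
  // knows they cannot affect anything below this point.
  return total;
}

console.log(sumOfSquares([1, 2, 3])); // 14
```

Because `square` lives only inside the loop body, anyone debugging this function never has to wonder whether it is modified somewhere else in the file.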
The other nasty surprise in dynamically typed languages is type coercion: when you compare values of different types, the language silently converts one to the other, which produces results like these:
[ ] == 0; // result is true
0 == "0"; // result is true
So why, exactly, is coercion a bad thing? First of all, it requires each variable that you create to take a larger chunk out of memory than would normally be required. In statically typed languages, like C#, each different variable type is allocated a certain, fixed amount of memory, which would correspond to the size of the maximum values for those types. With coercion, all variables are treated the same, and are allocated the same amount of memory, sometimes way more than is necessary. All of this can add up in your application, potentially causing it to run slower.
var a = "1";
var b = 2;
var c = a + b;
In a case like this, what would you expect the result of c to be? If you think it should be 3, then you’re a victim of coercion. The same goes if you think it will be 3 as a number. In actuality, the value populated in c is the string “12”: because a is a string, the + operator concatenates rather than adds. Adding a + sign to a (as in “+a + b”) will give you back 3 (as a number, not a string). Or how about the following examples:
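If numeric addition is what you actually want, one defensive habit (just a sketch of the idea) is to convert explicitly before applying the + operator:

```javascript
// Avoiding string/number coercion surprises by converting explicitly.
var a = "1";
var b = 2;

var concatenated = a + b;  // "12" — '+' concatenates when either side is a string
var sum = Number(a) + b;   // 3   — explicit conversion first, then numeric addition

console.log(typeof concatenated, concatenated); // string 12
console.log(typeof sum, sum);                   // number 3
```

Spelling out `Number(a)` also documents your intent to the next reader, which is the whole point of this rant.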
16 == [16]; // result is true
16 == [1,6]; // result is false
"1,6" == [1,6]; // result is true
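For what it's worth, one practical defense here is JavaScript's strict equality operator (===), which compares without coercing, so the surprising matches above disappear:

```javascript
// Loose equality (==) coerces operands before comparing;
// strict equality (===) requires matching types, so no coercion happens.
console.log([] == 0);         // true  — [] coerces to "" and then to 0
console.log([] === 0);        // false — object vs. number, no coercion
console.log("1,6" == [1,6]);  // true  — the array coerces to the string "1,6"
console.log("1,6" === [1,6]); // false — string vs. object, no coercion
```

Many teams simply ban == in favor of === for exactly this reason, which pairs nicely with always declaring your variables.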
So there you have it: ALWAYS DECLARE YOUR VARIABLES, EVEN IN DYNAMICALLY TYPED LANGUAGES. In the words of the Governator:
No, wait, that’s not right. Ah, here it is:
I’m finished ranting. Let me know what other topics you think I should touch on in the comments below, or feel free to message me on thwack®. Until next time, I wish you good coding.