
Geek Speak


According to Forrester, the SaaS application and software market is expected to reach $75 billion in 2014. Forrester goes on to state that the “browser-based access model for SaaS products works better for collaboration among internal and external participants than behind-the-firewall deployments.” When you think about it, today’s users at organizations spend most of their time in various “smart applications.” Whether it’s Office 365 or Salesforce, the user base accessing and using these applications is growing tremendously.

      

Monitoring the performance of these applications makes a huge difference as more and more users adopt SaaS and cloud-based applications. Monitoring server load, user experience, and bottlenecks is crucial to optimizing overall performance, whether the application is hosted on-premises, in a public cloud, or in a hybrid setup. If your organization uses several SaaS-based applications, consider the following when monitoring their performance and availability.

         

Monitor User Experience: Since users are going to be accessing the application extensively, you should monitor overall user experience and users’ interaction with the application. This allows you to analyze performance from the end user’s perspective. Slow page load times or image-matching issues can be the first indication that there’s an issue with the application. By drilling in deeper, you can determine whether the problem is related to a specific page or location. Ultimately, monitoring user experience allows you to improve and optimize application performance, which results in improved conversion rates.
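
If you want a quick feel for what a user-experience check does before deploying a full monitoring product, a synthetic transaction is the simplest form of it: request a page, time it, and flag it when it is slow or broken. Here is a minimal sketch in Python; the requests library is assumed to be available, and the URL and two-second threshold are illustrative placeholders, not recommendations.

    # Minimal synthetic user-experience check: fetch a page, time it,
    # and flag slow or failed responses. URL and threshold are examples only.
    import time
    import requests

    PAGE_URL = "https://example.com/login"   # hypothetical SaaS login page
    THRESHOLD_SECONDS = 2.0                  # acceptable page load time

    def check_page(url, threshold):
        start = time.monotonic()
        try:
            response = requests.get(url, timeout=10)
            elapsed = time.monotonic() - start
            healthy = response.status_code == 200 and elapsed <= threshold
            return healthy, response.status_code, elapsed
        except requests.RequestException:
            return False, None, time.monotonic() - start

    if __name__ == "__main__":
        healthy, status, seconds = check_page(PAGE_URL, THRESHOLD_SECONDS)
        print("status=%s load_time=%.2fs healthy=%s" % (status, seconds, healthy))

Run the same check from a few different locations and the "is it slow, and for whom?" question becomes measurable instead of anecdotal.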

     

You could also look at this in two ways: from the perspective of the service provider, and from the perspective of the service consumer.

     

Service providers need to focus on:

  1. User experience: Service providers likely have SLAs with end users and need to demonstrate that they are meeting uptime and other SLA commitments.
  2. Infrastructure: There are many factors that can cause a service failure, so all aspects of the infrastructure must be monitored, including applications, servers, virtual servers, storage, network performance, etc.
  3. Integration services (web services): The services provided often depend on other SaaS providers or internal apps, so those dependencies need to be watched as well.

        

Service consumers need to focus on: 

  1. User experience: If part of your web application consumes web services, this can be the first indication of a problem.
  2. Web service failures: This can help identify a failure in communication.

     

Focusing on these aspects is essential when you’re monitoring SaaS applications. These key considerations help IT admins take proactive measures to ensure applications don’t suffer downtime during crucial business hours. At the same time, continuous monitoring keeps each application optimized, improving overall efficiency.

            

Check out the online demo of Web Performance Monitor!

TiffanyNels

Countdown To FINAL ROUND

Posted by TiffanyNels Apr 9, 2014

Ok, fellow thwackers... we are getting down to the wire.  The next 12 hours of voting will determine which games go on to compete to win the HIGH SCORE or go GAME OVER.

 

If you haven't voted for your favorite game, you still have time on the clock. The final four round closes at MIDNIGHT tonight.  Call of Duty is facing Grand Theft Auto in a first-person shooter match up, while the other side of the bracket is a match up between two iconic games which many of you cut your gaming teeth on, Zelda v. Doom. 

 

Get out there and campaign for your favorites, vote early (but not often). There is still time on the clock to change the outcome of this round.

 

So, get to it. We will see you for the final boss battle.

 

Game on, gamers!

This is a very common predicament that most SQL developers and DBAs face in their day-to-day database encounters – regardless of the relational database platform being used. “Why is my database slow?” This could be for many reasons, with one of the hard-to-isolate reasons being slow query processing and longer wait times.

 

Reasons for Slow Database Performance

  • Network: There could be network connection issues
  • Server: The workload on the server hosting the database could be high, which slows database processing
  • Database/Query: There may be redundant query lines, complex or looping syntax, query deadlocks, a lack of proper indexing, improper partitioning of database tables, etc.
  • Storage: Slow storage I/O operations, data striping issues with RAID

 

While network issues and server workload can be easily measured with typical network monitoring and server monitoring tools, the real complexity arises with the understanding of the following database & query-related questions:

  • What query is slow?
  • What is the query wait time?
  • Why is the query slow?
  • What was the time of the day/week of the performance impact?
  • What should I do to resolve the issue?

  

Query response time analysis is the process of answering the above questions by monitoring and analyzing query processing time and wait time, and exploring the query syntax to understand what makes the query complex. We can break down query response time into 2 parts:

  1. Query processing time – the actual time the database takes to run the query. This includes measuring all the steps involved in the query operation and analyzing which step is causing the processing delay.
  2. Query wait time – the time a database session spends waiting for resources to become available, such as a lock, a log file, or any of hundreds of other wait events or wait types

 

Response Time = Processing Time + Waiting Time


Query wait time is determined with the help of wait time metrics called wait types or wait events. These indicate the amount of time sessions spend waiting on each database resource.

  • In SQL Server®, wait types represent the discrete steps in query processing where a query waits for resources as the instance completes the request. Check out this blog to view the list of common SQL Server wait types.
  • In Oracle®, queries pass through hundreds of internal database processes called Oracle wait events, which help you understand the performance of SQL query operations. Check out this blog to view the list of common Oracle wait events.
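
If you want to eyeball wait statistics yourself before reaching for a tool, SQL Server exposes cumulative wait data through the sys.dm_os_wait_stats DMV. A rough sketch in Python follows, assuming the pyodbc driver; the connection string is a placeholder, and note that this view is cumulative since the last service restart rather than a per-query breakdown.

    # List the top SQL Server wait types by accumulated wait time.
    # The connection string below is a placeholder for your own instance.
    import pyodbc

    CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
                "SERVER=myserver;DATABASE=master;Trusted_Connection=yes")

    QUERY = """
    SELECT TOP 10 wait_type, wait_time_ms, waiting_tasks_count
    FROM sys.dm_os_wait_stats
    ORDER BY wait_time_ms DESC;
    """

    with pyodbc.connect(CONN_STR) as conn:
        for wait_type, wait_ms, waits in conn.cursor().execute(QUERY):
            print("%-40s %12d ms  (%d waits)" % (wait_type, wait_ms, waits))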

 

A multi-vendor database performance monitoring tool such as SolarWinds Database Performance Analyzer will help you monitor all your database sessions and capture query processing and wait times so you can pinpoint the bottlenecks behind slow database response times. You can view detailed query analysis metrics alongside physical and virtual server workload and performance to correlate database and server issues. There are also out-of-the-box database tuning advisors to help you fix common database issues.

A recent survey commissioned by Avaya reveals that network vulnerabilities are causing more business impacts than most realize, resulting in revenue and job loss.


  • 80% of companies lose revenue when the network goes down; on average, companies lost $140,000 USD as a result of network outages
  • 1 in 5 companies fired an IT employee as a result of network downtime

 

And.....


  • 82% of those surveyed experienced some type of network downtime caused by IT personnel making errors when configuring changes to the core of the network
  • In fact, the survey found that one-fifth of all network downtime in 2013 was caused by core errors

 

Cases of Device Misconfigurations Leading to Network Downtime


Real-world scenario 1: Company Websites Down, Reason Unknown

Soon after a software giant had a big advertising campaign with major incoming Web traffic expected, its websites went down. Because the team was unable to pinpoint the cause as a configuration change made earlier, the websites remained unreachable for a few hours. With the time it took to identify the issue and re-establish connectivity, the organization suffered huge losses on the millions of dollars spent on the promotional campaign.


Troubleshooting: Given the situation, all thoughts pointed to a core router failure or a DoS attack. After checking and confirming that all critical devices were up, the next assumption was that the network was the victim of a DoS attack. But with no traffic flood on the network, the root cause had to be something else. After hours of troubleshooting and individually checking core and edge device configurations, it was eventually found that the WAN router had a wrong configuration. The admin who made the change, instead of blocking access to a specific internal IP subnet on port 80, ended up blocking port 80 for a wider subnet that also included the public Web servers. This completely cut off Web server connectivity to inbound traffic – a typo that cost the company millions!


Real-world scenario 2: Poor VoIP Performance, Hours of Deployment Efforts Wasted


A large trading company uses voice and video for inter-branch and customer communication. To prioritize voice and video traffic and ensure quality at all times, QoS policies are configured across all edge devices over a weekend. However, following the change, the VoIP application begins to experience very poor performance.

Troubleshooting: QoS monitoring suggests that VoIP and video have been allocated a lower priority than required. Instead of marking VoIP traffic with EF (Expedited Forwarding) priority, the administrator ended up marking VoIP packets with DF (Default class), resulting in poor performance for VoIP and video traffic. Correcting the setting to EF on all edge devices meant many more hours of poor performance and lost business.


Remediation


The network downtime in the above two cases could have been avoided via simple change notification and approval systems.


In the first case, notifying other stakeholders about the change would have helped correlate and identify the recent change as a possible cause of the issue. Troubleshooting would have been faster and normalcy restored by quickly rolling back the erroneous change.


In the second case, a huge change involving critical edge devices should have gone through an approval process. Having the configuration approved by a senior administrator before deployment can help identify and prevent errors that can bring the network down.


Both cases reflect poorly on the administrators. Bringing down the network was clearly not intentional!


Human errors are bound to occur in daily network administration. However, considering the impact a bad change can have on both the company and the person, it’s imperative to put NCCM (network configuration and change management) processes in place. To reduce human errors and network downtime, use a tool that supports NCCM processes such as change notification and approvals.
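
Change detection itself does not require an elaborate system to get started. As a hedged illustration of the idea, the sketch below compares two saved configuration snapshots with Python's standard difflib module; the file names are made up, and a real NCCM tool layers scheduled backups, notifications, and approval workflow on top of this.

    # Compare two saved device configuration snapshots and report what changed,
    # so a recent change can be correlated with an outage. File names are examples.
    import difflib

    def config_diff(old_path, new_path):
        with open(old_path) as f_old, open(new_path) as f_new:
            old_lines = f_old.readlines()
            new_lines = f_new.readlines()
        return list(difflib.unified_diff(old_lines, new_lines,
                                         fromfile=old_path, tofile=new_path))

    if __name__ == "__main__":
        changes = config_diff("wan-router.backup.cfg", "wan-router.running.cfg")
        if changes:
            print("Configuration changed since the last snapshot:")
            print("".join(changes))
        else:
            print("No configuration changes detected.")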


Check out this paper for more tips on reducing configuration errors in your network.


It's amazing,


I used to think my network was running pretty well. A few hiccups now and then, but by and large I got by and thought everything was business as usual. The boss would ask me, "Craig, how is the network today?" and I would say, "It's fine, boss." Flash ahead to fiscal year's end: there is some money left in the budget and I am given fifteen minutes to decide what tools to buy, otherwise the money doesn't get spent. Most people wouldn't be prepared for this, but I was ready. I pulled out my wish list and said I want SolarWinds Orion NPM. After a few frantic calls to the vendor of choice, it was all set. I'm thinking, this is great that I got this, but I probably won't use it that much, because we have no problems... and it's going to take forever to install, and the learning curve will be immense. In reality, the software was really easy and intuitive to install; point here, click here, answer a few questions and it was done. But why was there so much red everywhere? This must be a software bug, because my network has no problems... I spent the rest of the day tweaking and twiddling about, and I have to say, it was like turning the light on in a dark room. I was able to solve a lot of long-standing problems, some of them that I didn't even know I had.

 

There was the switch with a bad blade that always had problems intermittently, but never failed, so no alarm was tripped. After being alerted to this, I had the blade replaced and things began to run clean. Sometimes the network was slow but I never could attribute it to any single cause, it usually coincided with a home game at the local ballpark. Turns out a lot of non-work related web streaming was going on, and some other folks were enjoying Netflix.

There was the router that went down even though it had redundant power supplies; no one ever saw when the first one went down, but people sure noticed when the second one failed. I set up an alert to monitor this and several other things. The major cost of IT where I work is not so much the hardware or software, it's the cost of actually scheduling time with the union and paying for the lift truck; just the logistics were mind-boggling and intensive, and nobody wanted downtime. I am now able to easily automate and monitor my network, and do a lot more proactive monitoring and forecasting. I am just as busy as I was before; the difference is now I have a better view of what is going on with the network and I can act proactively instead of reactively. I have a lot less stress. I have lost fifty pounds and I have a corner office.. lol.. just kidding.. but I do get to sleep through the weekends without the pager going off at 3am, and I still go to the same number of meetings, but now they are more about future planning instead of postmortems.

 

What about you guys? Can anyone share a general process of things you might monitor and proactively forecast? Any tips and tricks pertaining to procedure are greatly appreciated!

I am now a believer in network performance management. It has really paid for itself many times over.

Hi there! I’m Michael Stump, a technology consultant with a keen focus on virtualization and a strong background in systems and application monitoring. I hope to spark some discussion this month on these topics and more.


Last month, I published a post on my personal blog about the importance of end-to-end monitoring. To summarize, monitoring all of the individual pieces of a virtualization infrastructure is important, but it does not give you all of the information you need to identify and correct performance and capacity problems. Just because each individual resource is performing well doesn’t mean that the solution as a whole is functioning properly.


This is where end-to-end monitoring comes in. You’re likely familiar with all of the technical benefits of e2e monitoring. But let’s talk about the operational benefits of this type of monitoring: reducing finger-pointing.


In the old days of technology, the battle lines between server and network engineers were well-understood and never crossed. But with virtualization, it’s no longer clear where the network engineer’s job ends and the virtualization engineer’s job begins. And the storage engineer’s work is now directly involved in both network and compute. When a VM starts to exhibit trouble, the finger-pointing begins.


“I checked the SAN, it’s fine.”

“I checked the network, it’s fine.”

“I checked vSphere, it’s fine.”


Does this sound familiar? Do you run into this type of finger-pointing at work? If so, share a story with us. How did you handle the situation? Does end-to-end monitoring help with this problem?

Whether it’s Hyper-V® or VMware® or any other virtual environment, growth in virtual machines (VMs) and workloads is inevitable in any data center. IT teams always want to know how many VMs can be created on a physical host, and how much more VM workload the host’s resources can support. Especially for Hyper-V environments, Microsoft® has expanded the limits of VM capacity with Hyper-V 2012.

 

According to this post on Petri, these are the capacity and scalability limits of Hyper-V VMs in Windows Server 2012 – a drastic improvement over Windows Server 2008 and 2008 R2.

  • Virtual processors per VM: 64
  • Logical processors in hardware: 320
  • Physical memory per host: 4 TB
  • Memory per VM: 1 TB
  • Nodes in a cluster: 64
  • VMs in a cluster: 8000
  • Active VMs: 1,024

So, what happens when you need to keep growing? You just add more VMs. And that’s not an easy job for the IT admin: you have to figure out the budget, procure host resources, and carry out the actual VM creation and assignment. But this is NOT the smart, cost-effective way to scale your VM environment.

 

Capacity planning is the process of monitoring VM and host resource utilization while predicting when the VMs will run out of resources and how much more workload can be added to them. The benefit is that you can optimize your Hyper-V environment, chart usage trends, reallocate unused resources to critical VMs, identify and control VM sprawl, and right-size the entire VM environment, rather than simply making a case for more resource procurement.

 

The proactive capacity planning approach would be to identify capacity bottlenecks so that you’re in a position to make an informed decision about VM expansion.

 

Top Reasons for Capacity Bottlenecks

  • Uncontrolled VM sprawl
  • Enabling HA without accounting for failover
  • Increase in VM reservation
  • Resource pool config changes
  • Natural resource utilization growth
  • Workload changes

 

Capacity Management: “What If” Analysis

The next step is to perform “What If” analysis to determine how much more load existing VMs can sustain and how many more VMs can be created for a specified workload. Third-party virtualization management tools, such as SolarWinds Virtualization Manager, provide dedicated capacity management functionality that allows you to perform VM capacity estimations and understand possible expansion.

VMan 1.png   VMan 2.png
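
The arithmetic behind a basic “how many more VMs will fit” estimate is simple enough to sketch. The Python snippet below is only an illustration of the idea: the free-resource figures and the average VM profile are invented, and real capacity planning tools also model peaks, failover reserves, and growth trends rather than a single average.

    # Rough "what if" estimate: how many more VMs of a given profile fit into
    # the remaining cluster headroom? All numbers are illustrative placeholders.
    def vms_that_fit(free_cpu_ghz, free_memory_gb, free_storage_gb, vm_profile):
        fits = [
            free_cpu_ghz // vm_profile["cpu_ghz"],
            free_memory_gb // vm_profile["memory_gb"],
            free_storage_gb // vm_profile["storage_gb"],
        ]
        # The most constrained resource determines the answer.
        return int(min(fits))

    average_sql_vm = {"cpu_ghz": 4, "memory_gb": 8, "storage_gb": 120}

    print(vms_that_fit(free_cpu_ghz=96, free_memory_gb=512,
                       free_storage_gb=6000, vm_profile=average_sql_vm))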

   

Key Questions to be Answered While Performing Capacity Planning

  • How can I detect capacity bottlenecks?
  • How can I predict capacity bottlenecks before they happen?
  • How many VMs can I fit within my current footprint?
  • What if I add more resources (VMs, hosts, storage, network, etc.) to my environment?
  • Which cluster is the best place for my new VM?
  • When will I run out of capacity?
  • How much resource is my average SQL Server®, Exchange, etc. VM using?
  • How much more resources do I need to buy and when?
  • How can I right-size my VMs to optimize existing capacity?

 

The capacity planning dashboard in Virtualization Manager tracks and trends CPU, storage IOPS, memory, network throughput, and disk space, and provides details on how many more large, medium, and small VMs you can add to your Hyper-V and other clusters.

 

Benefits of Capacity Planning

  • Monitor Hyper-V capacity operations and resource utilization, and forecast resource depletion
  • Align IT resources with business requirements and make informed purchase decisions on host resource procurement, VM creation, and overall budget planning
  • Gain insight into VM placement between or within Hyper-V clusters to deploy VMs across clusters efficiently
  • Pinpoint zombie or rogue virtual machines and over or under-allocated VMs to right-size your Hyper-V environment
  • Determine when and where Hyper-V bottlenecks will occur and identify the solutions

 

Read this TechNet post to learn more about Microsoft Windows Server 2012 Hyper-V Scalability limits.

 

Watch this short video to learn more about capacity planning and management - explained by Eric Siebert (vExpert)

 

 

It’s another year and another 5 stars for SolarWinds Log and Event Manager in SC Magazine’s SIEM group test! The reviewers tested every aspect of our SIEM – with a dual focus on log and event management as well as strong attention to usability, scalability, reporting, third-party support, and ease of implementation.

 

The verdict? “This is a solid product, worthy of consideration.”


“SolarWinds has put together another outstanding product. The SolarWinds Log & Event Manager (LEM) offers a quality set of log management, event correlation, search and reporting facilities. This gives organizations the ability to collect large volumes of data from virtually any device on a network in real time and then correlate the data into actionable information. The company does this by paying attention to the need for real-time incident response and effective forensics, as well as security and IT troubleshooting issues. Another winning set of features are the quality regulatory compliance management and ready-made reporting functions.”

 

With the increase of attacks on compliant companies, the previously separate focuses of security and compliance are converging. At the same time, attack methods are growing more sophisticated and harder to detect. At SolarWinds, we are dedicated to providing companies of every size the situational awareness and visibility previously available only to large enterprises. We are pleased that SC Magazine saw the results of our efforts.

 

The one weakness they noted is easy for us to address: “Consider a Ticket Management System for smaller companies.” We offer Alert Central as a free ticket management system that integrates easily with LEM. For those who need more robust reporting and tracking, we also offer Web Help Desk as a low-cost alternative.

 

To read the review, visit http://www.scmagazine.com/solarwinds-log--event-manager-v57/review/4153/

When Windows 8 launched, I wrote this scathing review, "Microsoft, have you lost your mind again?"  It was a bloodbath for Microsoft that day. Two years later, I just finished installing/tweaking Windows 8.1 here at the office. (It wasn't my choice.)

 

Windows 8 vs. 8.1

You can read my full review of Windows 8 at the link provided above, and I stand by it. Now let's examine the tweaks Microsoft has made to v8.1:

  • Lo and behold, the Start button/menu is back! (Sorta). Back to the way things were. An improvement, I guess.
    start.png
  • Aero glass effect, which I liked, needed to be installed using a third-party app. Still got it though.
  • Another "improvement:" I can now launch into desktop mode on boot (something previous versions did naturally) bypassing those ugly and useless tiles.
  • Icon spacing. This tweak was available in Windows 7 and earlier through the UI. In 8.1 I had to implement a registry hack, as evidenced by my MRU list in the Start menu above.
  • I'm experiencing a lag when typing versus what I see on the screen from time to time. Annoying but this does not happen too often, although it is happening as I write this.
  • OS seems a little sluggish. Time, and benchmarks, will tell.
  • Compatibility: Surprisingly, everything seems to work fine. Good job!
  • I've also learned that you can mount and unmount ISOs through the OS. No third party app needed. Sweet.
  • The shell graphics are more appealing and informative as well, but I think this may take away from performance. I still need to tinker more just to make sure.

 

Overall, I cannot complain about Windows 8.1. Let's slow down though. I won't praise it either. I still prefer Windows 7 any day of the week and twice on Sunday. (Funny, it's like the VPs over at Microsoft actually read part one of this article and listened! Go figure.) There is still work to be done though. The "working" part of the OS needs to be refined more to perform more like Windows 7 IMHO. At least this is a step in the right direction.

 

Office 2013

Office 2013 was also part of my transition. Just want to say a few words while I'm here:

  • The display is very flat. No appearance of texture. Difficult to distinguish between the "draggable" top portion of the app and the rest of it. And all the apps look the same. Very bland. See the pics below for comparison:
    What my Outlook used to look like - Outlook 2007 (Note: This is a random pic from Google.)
    old.png
    My current version of Outlook 2013 - Flat, no 3-D texture or feel. Looks like paper.
    inbox.png
  • Another observation was that they changed the way VBA understands VB. In other words, I had to re-write some of my code and register some older ActiveX controls to get my apps and macros working again. Took some time but I got it done.

Again, nothing terribly bad here, but I think we could all do without those ribbons. The real estate they chew up is just too valuable.

 

The Verdict.

Overall, not bad, but don't rush to upgrade just yet.

 

My Motto

"If you're happy with your OS, you can keep your OS. If you like your version of Office, you can keep your version of Office, period. End of story." (Wait. Why are you kicking me off of my current OS and Office versions and forcing me to "upgrade"? I was happy and liked what I had. You said over and over that I can keep what I liked! Is this better for me, or better for you?) Hmmm...see what I did there?

TiffanyNels

Do you want to Continue?

Posted by TiffanyNels Apr 1, 2014

The battle is on, Round 1 is complete and the community has spoken. 

 

  • Halo falls to Call of Duty
  • Ms PacMan, too much like PacMan.  Donkey Kong prevails!
  • Time invested in WoW creates a higher level of commitment than Baldur's Gate
  • The all-out melee of Smash Bros may just have beat Punch Out by virtue of the plethora of favorite characters NOT represented in the bracket
  • Golden Eye 007? Huh?

 

All of this means that we are down to 16 gaming heavyweights. And now the match ups get a little more complicated, given that MOST of these games have very little in common. How will you judge Madden NFL versus Galaga? Half-Life versus Mario Kart? Can Mortal Kombat stand against the game that spawned a movie about the competition for a HIGH SCORE?

 

Head here to view all of the match ups and cast your vote to decide who moves on for the honor of representing each of the four bracket divisions.

 

We are getting close to the end, don't miss your chance to chime in and push your favorites to victory.

 

And, while we are at it, let us know who you think will reign supreme...

 

Round 2 VOTING is HERE.  And remember, you have to be logged in to comment and vote. This round ends tomorrow (April 2) at MIDNIGHT.

 

Oh, and by the way, I was informed that I unfairly worded the Zork question which was confirmed by zachmuchler's IMDB post. While that game is not specifically Zork, it was based on Zork (rights can be hard to secure, you know).  For that reason, we will award an extra 50 points to the following thwack members:  crippsb, zachmulcher and bradkay.  Congrats, and use those points wisely.

 

If you are just joining us, you can catch up here (Let's LEVEL UP!) and here (The Cheat).

When storage costs keep climbing, a storage admin has to be deliberate about how data is placed across the organization’s various storage devices and media. Stored data has to be prioritized, and decisions made about where to store what. Critical data requiring frequent access needs a high-performance (and expensive) storage array, while less important data and backups can sit on slower, cheaper disks. This process of moving data between different storage types is known as storage tiering.

 

Examples of Storage Tiering

Tier 1 – Mission-critical and recently accessed files – stored on expensive, high-quality media such as double-parity RAID

Tier 2 – Seldom-used files, classified files, and backups – stored on less expensive media in a conventional SAN

Tier 3 – Event-driven, rarely used, unclassified files – stored on recordable compact discs or tape

 

These are just some examples to illustrate the concept. Actual storage tiers vary depending on your organization’s storage access patterns, requirements, and hardware availability.

 

Automated Storage Tiering

To overcome the manual efforts involved in moving storage across different tiers, storage vendors came up with the concept of automated storage tiering – which is a storage software management feature that dynamically moves information between different disk types and RAID levels to meet space, performance and cost requirements. [1]

  

Storage admins can decide to dynamically move storage blocks or LUNs across different storage types for different reasons.

  • Progression is where highly accessed data and critical files are moved from cheaper, lower-performance storage media to high-performance SAS drives or SSDs.
  • Demotion is where infrequently used data can be automatically moved to slower, less-expensive SATA storage.

 

The movement of LUNs across storage tiers is managed by auto-tiering algorithms programmed into the storage management software. Some software uses heat-map approaches to determine storage access and usage over a particular time period and dynamically move storage from one tier to another. Many leading vendors in the market support this functionality, including:

  • Dell® Compellent – Data Progression
  • EMC® – Fully Automated Storage Tiering (FAST)
  • HP® 3PAR – Adaptive Optimization
  • IBM® – Easy Tier
  • Hitachi Data Systems® – Dynamic Tiering

 

Some vendors claim that it is possible to perform real-time automated-tiering on storage blocks in small sizes – ranging in KBs to a few MBs.
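
As a back-of-the-envelope illustration of the heat-map idea mentioned above, the toy Python sketch below classifies LUNs by recent I/O activity and flags candidates for promotion or demotion. The IOPS figures and thresholds are invented for illustration; production auto-tiering works at much finer granularity and over configurable observation windows.

    # Toy heat-map tiering decision: promote hot LUNs, demote cold ones.
    # IOPS figures and thresholds are invented for illustration only.
    PROMOTE_ABOVE_IOPS = 1000   # busy enough to justify SSD/fast SAS
    DEMOTE_BELOW_IOPS = 50      # quiet enough to live on cheaper SATA

    lun_activity = {            # average IOPS over the observation window
        "lun-finance-db": 4200,
        "lun-file-archive": 12,
        "lun-web-content": 310,
    }

    for lun, iops in lun_activity.items():
        if iops >= PROMOTE_ABOVE_IOPS:
            action = "promote to a faster tier"
        elif iops <= DEMOTE_BELOW_IOPS:
            action = "demote to a cheaper tier"
        else:
            action = "leave on the current tier"
        print("%s: %d IOPS -> %s" % (lun, iops, action))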

 

Third-party storage management software can also provide information on disk storage for LUNs based on storage tier, the tiering policy applied, and whether auto-tiering is enabled. You can check out SolarWinds Storage Manager v5.7, which will extend support to EMC FAST VP implementations on VMAX/Symmetrix and VNX arrays. Learn more about the Storage Manager v5.7 beta trial >>

Bronx

VBA - A quick lesson

Posted by Bronx Mar 27, 2014

Okay, so by now you should know that I like tinkering...and showing you how I do it. See the following for two examples:

Today's lesson will be in VBA for Outlook. The challenge? Schedule a meeting/appointment on a public calendar for all to see while simultaneously sending a specific person a one day reminder to take care of the newly scheduled event. Simply put, control and manipulate both public and private calendars.

Time for the pictures! Here's what I came up with using the Outlook VBE, referencing the Calendar control:

art.png

Simple right? Fill in the fields, pick a day, then Submit. Here's the result of hitting that li'l Submit button:

My personal calendar gets a 24 hour reminder scheduled at the right time.
r2-private.pngr3-private.pngr1.png

Public calendar also gets updated so others can schedule around what is going on.

bcpub.png

Before I show you the code

If you do not know what VBA is or how to access it in Outlook, go figure that out first. The form (Article Scheduler) at the top of this page lives here in the Outlook VBE:

f1.png

You'll need to create the form with the control names I have in the code below. Also, to run this from your Outlook toolbar, create a new Module (I have two above). In the new module, enter these three lines of code:


Sub RunScheduler()

    Scheduler.Show

End Sub

 

Once complete, you can drag the macro button to your toolbar.

tps.png

This is not a tutorial. Rather, it is an example you can tailor to your own needs by examining the code and changing what you want to get the desired effect. A little VBA research on your part may be in order.

 

The Code (Put this in the code section for the Scheduler form):

 

    Dim ola As Outlook.AddressList
    Dim ole As Outlook.AddressEntry
    Dim WriteDate As Date
    Dim EmailAddy As String

    Private Sub Calendar1_Click()
        txtMsg.Text = ""
    End Sub

    Private Sub CheckBox1_Click()
        CheckBox1.Value = Not CheckBox1.Value
    End Sub

    Private Sub ComboBox1_Change()
        txtMsg.Text = ""
    End Sub

    Private Sub CommandButton1_Click()
        Dim myItem As Object
        Dim myRequiredAttendee As Outlook.Recipient
        Dim myOptionalAttendee As Outlook.Recipient
        Dim myResourceAttendee As Outlook.Recipient

        If ComboBox1.Text = "" Then MsgBox "Really? Step 1 is entering an author's name."
        If CheckBox1.Value = True Then

            EmailAddy = ComboBox1.Value
            WriteDate = CDate(Calendar1.Value & " 8:00 AM")

            ' Create the meeting request that puts a 24-hour reminder
            ' on the author's personal calendar.
            Set myItem = Application.CreateItem(olAppointmentItem)
            With myItem
                ' Add the To recipient(s) to the message.
                Set myRequiredAttendee = .Recipients.Add(EmailAddy)
                myRequiredAttendee.Type = olTo
                ' Resolve each recipient's name against the address book.
                For Each myRequiredAttendee In .Recipients
                    myRequiredAttendee.Resolve
                Next
            End With

            myItem.MeetingStatus = olMeeting
            myItem.Subject = "Write an article for tomorrow, due at 8am."

            If txtTitle.Text <> "" Then
                myItem.Body = txtTitle.Text & " for " & txtForum.Text & "."
            Else
                myItem.Body = "Write an article for tomorrow, due at 8am."
            End If

            myItem.Location = "Your Desk."
            myItem.Start = WriteDate
            myItem.Duration = 90
            myItem.ReminderMinutesBeforeStart = 1440
            myItem.ReminderSet = True

            Set myRequiredAttendee = myItem.Recipients.Add(EmailAddy)
            myRequiredAttendee.Type = olRequired
            myItem.Send
            ComboBox1.Value = ""
            txtMsg.Text = "Reminder sent to " & EmailAddy & "."

            ' Add a matching appointment to the shared public calendar
            ' so everyone else can schedule around it.
            Dim myNameSpace As Outlook.NameSpace
            Dim myFolder As Outlook.Folder
            Dim SubFolder As Outlook.AppointmentItem

            Set myNameSpace = Application.GetNamespace("MAPI")
            Set myFolder = myNameSpace.Folders.Item(3)
            Set SubFolder = myFolder.Folders("All Public Folders").Folders("Your Public Sub Calendar").Items.Add(olAppointmentItem)

            With SubFolder
                .Subject = EmailAddy
                .Start = WriteDate
                .Save
            End With

        End If

    End Sub

    Private Sub UserForm_Initialize()
        ' Default the calendar to today and fill the author drop-down
        ' from the Global Address List.
        Calendar1.Value = Now

        Set ola = Application.Session.AddressLists("Global Address List")
        For Each ole In ola.AddressEntries
            ComboBox1.AddItem ole.Name
        Next
        Set ola = Nothing
        Set ole = Nothing

    End Sub


Welcome to the SolarWinds Blog Series, ‘Basics of Routing Protocols’. This is the last of a four part series where you can learn the fundamentals of routing protocols, types, and their everyday applications in network troubleshooting.

In the previous blog, we discussed Open Shortest Path First (OSPF), OSPF message types, and the protocol’s advantages and disadvantages. In this blog, we’ll shed some light on another popular routing protocol: EIGRP (Enhanced Interior Gateway Routing Protocol).

 

What is EIGRP (Enhanced Interior Gateway Routing Protocol)?

EIGRP, a distance vector routing protocol, exchanges routing table information with neighboring routers in an autonomous system. Unlike RIP, EIGRP shares routing table information that is not available in the neighboring routers, thereby reducing unwanted traffic transmitted through routers. EIGRP is an enhanced version of IGRP and uses Diffusing Update Algorithm (DUAL), which reduces time taken for network convergence and improves operational efficiency. EIGRP was a proprietary protocol from Cisco®, which was later made an Open Standard in 2013.
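
For reference (the post itself doesn't go into it), the route metric EIGRP feeds into DUAL is, by default, a function of the slowest link bandwidth and the cumulative delay along the path. Below is a small Python sketch of the default composite metric (K1 = K3 = 1, K2 = K4 = K5 = 0); the example values are arbitrary.

    # EIGRP default composite metric (K1 = K3 = 1, K2 = K4 = K5 = 0):
    # 256 * (10^7 / slowest-link-bandwidth-in-kbps + cumulative-delay-in-usec / 10)
    def eigrp_metric(min_bandwidth_kbps, total_delay_usec):
        bandwidth_term = 10**7 // min_bandwidth_kbps   # scaled inverse of the slowest link
        delay_term = total_delay_usec // 10            # delay in tens of microseconds
        return 256 * (bandwidth_term + delay_term)

    # Example: a path whose slowest link is a 1,544 kbps T1 with 40,000 usec of
    # cumulative delay -- values chosen purely for illustration.
    print(eigrp_metric(1544, 40000))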

 

EIGRP Packet Types

Different message types in EIGRP include:

  • Hello Packet – The first message type sent when the EIGRP process is initiated on a router. Hello packets identify neighbors and form adjacencies, and are multicast every 5 seconds by default (every 60 seconds on low-bandwidth networks).
  • Update Packet – Contains route information and is forwarded only when there is a change. Updates are sent only to the routers that need them (partial updates). When a new neighbor is discovered, update packets are sent to that router as unicast.
  • Acknowledgement – Unicast in response to an Update packet to confirm that the update was received.
  • Query – Sent to ask neighbors about routes. When a router loses a route, it multicasts a Query packet to all neighboring routers to find an alternate path.
  • Reply – Unicast by routers that know an alternate route in answer to a neighbor’s Query.


EIGRP – Pros and Cons

Speedy network convergence, low CPU utilization, and ease of configuration are some of the advantages of EIGRP. EIGRP routers store neighbor and topology table information, so they can quickly fall back to alternate routes. Support for variable-length subnet masks reduces time to network convergence and increases scalability, and EIGRP also includes MD5 route authentication. Compared to RIP and OSPF, EIGRP is more adaptable and versatile in complex networks, combining many features of both link-state and distance-vector protocols. On the downside, since EIGRP is mostly deployed in large networks, routers can be slow to answer queries within the allotted time, which causes neighboring routers to re-send their queries and increases traffic.


Monitor Routers Using EIGRP in Your Network

Advanced network monitoring tools have the ability to monitor network route information and provide real-time views of issues that might affect the network. Using monitoring tools, even in small networks, you can view router topology, routing tables, and changes in default routes. You can also check out the overview blogs on the RIP and OSPF routing protocols.


 

We’re getting close to the end of the month, so that must mean it’s time for another installment of our ever-popular IT Blogger Spotlight series.

 

I recently caught up with John Herbert of LameJournal fame, who was kind enough to answer a few questions. In addition to following John’s exploits on LameJournal, you can keep up with him on Twitter, where he’s affectionately known as @mrtugs.

 

SWI: Tell me about LameJournal and how you got started with it.

 

JH: A while back I purchased LameJournal.com with the intent of grabbing the freely available LiveJournal source code and running a spoof site, as if the site itself weren’t sufficiently self-derivative. I sat on the domain for at least five years and failed to do anything with it, mainly because it sounded like an awful lot of work just for a joke.

 

Then in April 2011, I went to a flash photography seminar and was so buzzed about the event I felt that I just had to share my enthusiasm, so I dusted off the domain, installed Wordpress and created my first post beyond the default “Hello World.”

 

That post was looking a bit lonely on its own, and somebody had been asking me to explain Cisco's Virtual Port Channel technology to them, so I put out a post on VPC the next day. Like somebody with a new toy, I then started taking things that were on my mind and turning them into posts, because hey, somebody might be interested, right? Cisco Live, Visio, TRILL, some training I went on, and so forth. While the blog subtitle is “Networking, Photography and Technology,” it became evident very quickly that the content was going to be primarily about networking, with an occasional glance at photography and other technology.

 

SWI: And as they say, the rest is history, right?

 

JH: Yep. Really, it’s ended up being an outlet for anything I think is interesting. One of the things I found hardest when I started blogging was to get over the feeling that the information I wanted to share might not be noteworthy, or is already covered elsewhere. My attitude now—and the one I share with others to encourage them to blog, too—is to say, “OK, was this new or interesting to me? Then blog about it.” After all, if it’s new to me, then it’ll be new to somebody else out there, too, which means I should write the post!

 

With that said, I still get the most pleasure from writing about something that will help other people in some way, especially if I can fill a gap in the information already out there and provide a unique resource. I don’t actively look for those topics, but they’re great when they crop up. Beyond that, I usually write about real situations—either current or past—that were interesting or challenging so that (a) I have a record of it, and (b) it might save somebody else some trouble later.

 

SWI: I like the way you think! Do you find you get more interest in certain topics than others?

 

JH: Experience has shown there’s not really a good predictor as to whether a particular post will generate interest, but I find there are two general types of posts that have done better. The first are posts describing problems I’ve had and, if possible, how I fixed them. They’re successful because when somebody else experiences the same issue, they search the Web and my post shows up in the results. Even if there’s a frustrating lack of solution, I personally find great solace in knowing that I’m not the only idiot with a particular problem. For example, I’ve written posts about Cisco AnyConnect ActiveX, Office 2013 and iTunes Match that were very popular over time; they seem to have lasting appeal. The other category of posts that do better are those covering new technologies, where information out there is a bit patchy. Examples include TRILL and Cisco's VPC, and more recently discussions about software defined networking. Posts that are topical may be successful short term, but they tend to have less long term interest, which makes sense when you think about it.

 

SWI: Definitely. So, what do you do professionally?

 

JH: I’m a consultant for a professional services company. So, to put it simply, I move packets for other people. Consulting is interesting in part because I get to see so many different networks, teams and company structures, rules, procedures and architectures. I like the insight this gives me, and I find it fascinating to see what each client determines is most important for their network.

 

SWI: Very interesting. How did you get into IT in the first place?

 

JH: I kind of fell into it, really. I've always enjoyed working with computers and was programming SAS (database/statistics software) when a friend suggested I should join the company he worked for and do networking. I really didn't get what it was that he did, despite him trying to explain it, but the pay sounded good so I made the leap and haven't looked back.

 

SWI: What are some of your favorite tools as an IT pro?

 

JH: From the SolarWinds portfolio, Engineer's Toolset has been on my work laptop builds almost continually since the year 2000. Fun fact, I actually joined International Network Services in 1999, and that’s where Don Yonce (one of SolarWinds’ co-founders) was also working. So, I have always felt like I have a special relationship with the SolarWinds products. So, I also typically have SolarWinds free tools installed on my own machines (the Advanced Subnet Calculator is a very old friend of mine!). I’m currently using a MacBook right now, so I’m feeling a little lonely, but since my other favorite networking tool is Perl, I’m all set for that at least. The ability to program in one scripting language or another is a huge benefit to any network engineer in my opinion, and was so even before SDN reared its head.


SWI: And what are you up to these days when you’re not working or blogging?

 

JH: I have a wife and three school-aged children, a home network to perfect and meals to cook. So, beyond working and blogging I mainly eat and sleep. Occasionally, I play some piano, which I find very cathartic, and I’m also on the board of directors for my home owners’ association, which eats up some more time. As my blog title suggests I also enjoy photography, and I really should get out and do more of it.

 

SWI: Well, I hope you’re able to. Switching directions a bit, what are some of the most significant trends you’re seeing in the IT industry right now, and what do you think is to come?

 

JH: In the networking world, the buzzword-du-jour is SDN. One way or another, there’s a huge paradigm shift occurring where pretty much every vendor is opening up access to their devices via some form of API, and there’s a growing new market for controllers and orchestrators that will utilize those APIs to automate tasks. Those tasks can be anything from configuring a switch port or VLAN as part of a service chain to instantiate a new service to programming microflows on a switch. I said “devices,” but lest it sound like this just means hardware—the network “underlay”—SDN also extends to the software side of things too, both in terms of encapsulations like VXLAN, an overlay, and features like network function virtualization, which also offers some exciting possibilities.

 

My one fear is that SDN encompasses so much, it’s in danger of becoming another meaningless marketing term like “cloud,” and I'm waiting to see the first “SDN Compliant” sticker on a box. That aside, the innovation in the SDN space, both proprietary and open source, is redefining the way networks can be built and operated, and it’s a very exciting time to be in this industry. The downside is that there’s so much going on, there aren’t enough hours in the day to keep up with all the blog posts that could be written!

 

SWI: Well, that’s all of my questions. Is there anything else you’d like to add?

 

JH: If I may, I’d like to give a shout out and a thank you to all the networking and technology bloggers out there. In many IT and networking teams, there’s that one person who hoards information and believes they’re creating job security by being the only one to understand something, and thus they resist sharing that knowledge with others. Blogging is the polar opposite of that; bloggers take the opportunity to share information that may improve somebody else’s ability to do their job, help them avoid a problem before it happens or just make you smile because somebody else is experiencing the same challenges as you. I stand in awe at the quantity and quality of posts that some people manage to create. I use an RSS reader so that I can follow a large number of those blogs in a manageable way, and I strongly recommend RSS.

 

I would also encourage anyone who reads this to consider whether or not they have something they could share with others via a blog. I look at it this way, if I learned something new today, maybe I could help somebody else learn that thing tomorrow. And to paraphrase "Wayne’s World," I only hope you don’t think it sucks!

As much as we try to understand the importance of password security – whether it’s for a computer login, email account, network device, Wi-Fi or domain access – we don’t seem to meticulously implement it every time we set up or change a password. Password security is a popular topic for IT pros and end-users alike, which has remained a hot “good to know” topic, and not always an “I’ll do it right away” thing.

 

There’s yet another example of a password leakage debacle that reinforces the need to enforce stricter password security measures. During the pre-game coverage for NFL Super Bowl XLVIII, the stadium’s internal Wi-Fi login credentials were displayed on a big screen in the network control center, and a televised broadcast of video footage from that room revealed the big screen and the password – unencrypted, in full view! It could, of course, be called an oversight; but when it comes to protecting IT assets and securing data, this was a lack of due diligence on the part of the stadium’s IT security team, who did not review the footage well enough before the telecast to nip the exposure in the bud.

 

Speaking of password exposure, let’s discuss some best practices: one, how to build a strong password that is hard to guess; and two, some things to remember about leaving your passwords accessible to others.

 

Best Practices to Protect & Strengthen Your Passwords

Password Sharing Doesn’t Make You Noble or Kind: Never share your passwords with anyone unless you are absolutely certain there won’t be regrettable ramifications. You never know whether their system is compromised, whether they leave passwords written out in the open, or whether they are a gullible social-engineering target. Even if you have to share a password for some reason, change it immediately after their need for your login access is fulfilled.

  

Make Them Long, Make Them Strong: Longer passwords are difficult to guess, especially if they are alphanumeric, include special characters, and mix lowercase and uppercase characters.

  • Use at least 8 characters in your password. The longer, the stronger.
  • Make passwords more complex and difficult to guess.
  • You can even use password-generating software available online to spin up a strong string for your password (a minimal do-it-yourself sketch follows this list).
  • Do not use biographical details such as your name, date of birth, or city in your password, as they can be easily guessed.
  • Try to ensure your passwords don’t contain common words from the dictionary.
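
If you’d rather not paste a prospective password into an online generator, a few lines of Python using the standard library’s secrets module do the same job locally. This is a minimal sketch; the length and character set are arbitrary choices you can adjust.

    # Generate a random password from a mixed character set using the
    # cryptographically secure 'secrets' module. Length is an arbitrary choice.
    import secrets
    import string

    def generate_password(length=16):
        alphabet = (string.ascii_lowercase + string.ascii_uppercase
                    + string.digits + "!@#$%^&*")
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())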

  

Strict No-No for Common/Same Passwords: A hacker has many devious ways, such as brute force attacks, to get into your system. Reusing common or identical passwords across different sites and purposes only makes the attacker’s life easier.

 

Not All Computers Are Your Friends: Keystroke logging (aka keyboard capturing) has become common malware that finds its way into unprotected systems quite easily. You may never know it, but your keystrokes could be captured as you type out your passwords. There are various types of keystroke-capturing software that could swipe your passwords: hypervisor-based malware, API-based, kernel-based, form grabbing-based, memory injection-based, and packet analyzers. Always remember to log out of your personal accounts when you are using someone else’s system.

 

Beware of the Eye of Sauron: Be watchful of your immediate vicinity when you enter your password into a system holding financial or other personally identifiable information.

 

As Much As You Do, Your Passwords Too Need Change: It’s always best to change your password every once in a while, and not to reuse an old password for at least a year. Whether your system prompts you to or not, do make it a point to periodically change your password.

 

Don’t Make it to The Hackers’ Hall of Fame

Splashdata, a password management company, has released a list of "25 worst passwords of the year" for 2013 which was compiled using data that hackers have posted online (believed to be stolen passwords).

 

1) 123456   2) password   3) 12345678   4) qwerty   5) abc123
6) 123456789   7) 111111   8) 1234567   9) iloveyou   10) adobe123
11) 123123   12) admin   13) 1234567890   14) letmein   15) photoshop
16) 1234   17) monkey   18) shadow   19) sunshine   20) 12345
21) password1   22) princess   23) azerty   24) trustno1   25) 000000

 

Top 10 Password Preferences: The Weak & Common Themes

Google has released a list of the most popular password selection themes, based on a study of 2,000 people examining how they create passwords. Here are the 10 most common and easiest-to-break themes.

 

1) Pet’s name
2) Significant dates (like a wedding anniversary)
3) Date of birth of a close relation
4) Child’s name
5) Other family member’s name
6) Place of birth
7) Favorite holiday
8) Something related to a favorite football team
9) Current partner’s name
10) The word "password"

  

Yes, I agree periodic password change is a grind. On top of that, you have to remember what you used earlier so you don’t repeat it. But it’s all worth the effort to manage and secure your passwords rather than face the consequences of an account breach, data theft, and all the other fallout. And do make sure to protect your password and keep it away from those prying hacker eyes!
