
Geek Speak


Help Desk and Service Desk are names that are often used interchangeably in IT. But have you ever stopped to think about what Help Desk and Service Desk actually mean, and how they differ?


Confused?



In this article, I’ll take you through the different characteristics of Help Desk and Service Desk so that, by weighing the points discussed, you can decide which tool best suits you and your organization.



Help Desk:


A help desk is useful when a service needs to be provided to customers (mostly IT users and employees, both inside and outside of an organization), and that service benefits from handling many troubleshooting requests in parallel.


A help desk provides:


Case management - Ensures all tickets are addressed and resolved in a timely manner. This also helps prevent tickets from being overlooked by technicians.

Asset management - Keeps asset information up to date, which in turn makes it easier for technicians to quickly address and complete tickets. You also gain access to knowledge base articles related to the issue raised.

SLAs - Service level agreements are associated with a help desk, but they are technology focused rather than business oriented, for example, the percentage of storage space remaining.


Service Desk:


A service desk comes into the picture when technicians or business administrators need service and/or coordination within the IT department. Any business that integrates with IT for its business needs will likely want a service desk in place. In a nutshell, the service desk is the point of contact for all IT-related requests and is much more complex and integrated than a help desk.


A service desk provides:


Incident management - Helps customers get back to a productive state as soon as possible, whether by passing along information, providing a workaround, or resolving the issue.

Problem management - Helps customers identify the root cause of a reported issue.

Change management - Helps make orderly changes to the IT infrastructure.

Configuration management - Helps organize assets and change requests across the organization. The service desk also maintains the configuration management database (CMDB), where all configuration data is stored.

A service desk also helps in the design, execution, and maintenance of SLAs (service level agreements) and OLAs (operational level agreements).


So, which is best for you?  Take a look below at a few more things to consider when choosing Help Desk or Service Desk:

 

 

Help Desk vs. Service Desk:

1) Help Desk: To provide excellent technical support. | Service Desk: To provide excellent incident management.

2) Help Desk: To solve problems efficiently and effectively. | Service Desk: To get the customer back up and running to productivity ASAP.

3) Help Desk: To have a single point of contact for all technical issues. | Service Desk: To have an ITIL framework for IT processes.

4) Help Desk: IT infrastructure changes happen through an informal process. | Service Desk: IT processes are formally defined.

5) Help Desk: Keeps track of the IT assets in the network. | Service Desk: Keeps track of configuration changes and the relationships among them using the CMDB.

6) Help Desk: Defines to the customer how you will solve the issue. | Service Desk: Defines only the SLA and OLA to customers.



Well, I hope this article helped you get a better understanding of which solution is best for you or your organization. Please feel free to share your own take on the differences between Help Desk and Service Desk in the comments below.

Security for mobile devices in the federal space is improving, but there’s still a lot of ground to cover

 

In June, we partnered with Market Connections to survey decision makers and influencers within the US federal IT space. While the report covers several areas, and I strongly recommend having a look at the whole survey, the section on mobile device usage caught my attention.

 

Observations

What is clear from the report is that mobile devices and the security issues that come with them are being taken seriously in federal IT, or at least in some sectors. The problem is that "some sectors" is not nearly enough.

 

Let’s take one of the first statistics on the survey—permitting personal devices to access internal systems.

 

The good news is that 88% of the respondents said that some form of restriction is in place. But that leaves 11% with more or less unfettered access on their personal mobile devices. Eleven percent of the federal government is a huge number, and there is almost no way to look at that 11% and call the resulting risk acceptable. And that's just the first chart. Other areas of the survey show splits that are closer to 50-50.

 

For example, 65% of the respondents say that data encryption is in place. That means 35% of the mobile devices in the field right now have no encryption. Honestly, for almost all of the points on the survey, nothing less than a number very close to 100% is acceptable.

 

Recommendations

What can be done about this? Obviously this one essay won’t untie the Gordian Knot in one fell swoop, but I am not comfortable throwing my hands up and saying “yes, this is a very bad problem.” There are solutions and I want to at least make a start at presenting some of them.

 

Let’s start with some of the easy solutions:

 

Standard Phones

Whether or not phones are agency-provided, there should be a standard setup for each of them. Employees can provide their own phone as long as it's on a short list of approved devices and they allow it to be provisioned. This helps agencies avoid having devices with known security gaps on the network, and it allows certain other features to be implemented, like:

Remote Wipe

Any phone that connects to government assets should be subject to remote-wipe capability by the agency. This is a no-brainer, but 23% of respondents don't have it or don't know if it is in place (trust me, if it were there, you'd know!).

VPN

Every mobile device that connects to federal systems should be set up with a VPN client, and that client should be set to activate automatically whenever a network is connected, whether Wi-Fi or cellular. Not only would this help keep the actual traffic secure, it would also force all federal network traffic through a common point of access, which would allow for comprehensive monitoring of bandwidth usage, traffic patterns, user access, and more.

 

Security Training

The number one vector for hacking is still, after all these years, the user. Social engineering remains the biggest risk to the security of any infrastructure. Therefore, before a device (and its owner) is permitted onto federal systems, agencies should require a training class on security practices, including password usage, two-factor authentication, which networks are safe or unsafe to connect to (airports, café hotspots, etc.), and even how to keep track of the device to avoid theft or tampering.

 

All of those options, while significant, don’t require massive changes in technology. It’s more a matter of standardization and enforcement within an organization. Let’s move on to a few suggestions which are a little more challenging technically.

 

Bandwidth monitoring

Note that this becomes much easier if the VPN idea mentioned earlier is in place. All mobile devices need to have their data usage monitored. When a device suddenly shows a spike in bandwidth, it could indicate that someone is siphoning data off the device on a regular basis.

 

Of course, it could also mean the owner discovered Netflix and is binge-watching “The Walking Dead.” Which is why you also need…

 

Traffic Monitoring

Quantity of bytes only tells half the story. Where those bytes are going is the other half. Using a technique such as NetFlow makes it possible to tell which systems each mobile device is engaged in conversations with, whether it's 1 packet or 1 million. Unexpected connection to China? That would show up on a report. Ongoing outbound connections of the same amount of data to an out-of-state (or offshore) server? You can catch those as well.

 

The key is understanding what "normal" is, both from an organizational perspective and for each individual device. Ongoing monitoring provides the baseline and highlights the outliers that rise up out of those baseline data sets.
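To make the baseline-and-outlier idea concrete, here is a minimal Python sketch. The input format, the seven-day minimum, and the three-sigma threshold are illustrative assumptions, not part of the survey or of any particular monitoring product.

```python
# Hedged sketch: flag mobile devices whose data usage today is far above
# their own historical baseline. The input is assumed to be a dict of
# device ID -> list of daily byte counts exported from whatever usage or
# flow collector is in place.
from statistics import mean, stdev

def find_outliers(usage_history, today_usage, sigma=3.0):
    """Return device IDs whose usage today exceeds baseline + sigma * stddev."""
    outliers = []
    for device, history in usage_history.items():
        if len(history) < 7:              # not enough data to call anything "normal"
            continue
        baseline = mean(history)
        spread = stdev(history)
        if today_usage.get(device, 0) > baseline + sigma * spread:
            outliers.append(device)
    return outliers

# Example with made-up numbers: a device that normally moves ~200 MB/day
history = {"device-42": [210e6, 190e6, 205e6, 198e6, 220e6, 201e6, 195e6]}
today = {"device-42": 2.4e9}              # a sudden 2.4 GB day
print(find_outliers(history, today))      # ['device-42']
```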

 

User Device Tracking

User Device Tracking correlates where a device logs on (which office, floor, switch port, etc.) along with the user that logs in on that device to build a picture of how a mobile device moves through the environment, who is accessing it (and accessing resources through it), and when it is on or off the network. Having a UDT solution in place is for more than just finding a rogue device in real-time. Like traffic monitoring, having a steady and ongoing collection of data allows administrators to map out where a device has been in the past.

 

In Closing

The mobile device section of the federal survey has much more useful information, but the recommendations I've raised here, along with the reasons why it is so important to implement them, would, if executed, provide a higher level of security and stability. I hope you'll take a moment to consider them for your organization.

Full survey results: http://www.solarwinds.com/assets/industry/surveys/solarwinds-state-of-government-it-management-and-m.aspx


First off, I’d like to introduce myself. My name is Damon, and I am the Managing Editor of Geek Speak. I’m not sure why it has taken me so long to post a blog. But that said, I’d like to thank you all for your continued support of thwack, Geek Speak, and for our amazing contributors.

 

While I’m not exactly versed in IT, I do respect all the various aspects of IT. However, I have always seemed to gravitate more towards Security. I am amazed at the number of security breaches that occur every year. There have been many notable breaches, but I remember firsthand experiencing the PlayStation® Network hack and it being down for an extended period of time (Yeah life was rough that weekend…no Destiny!).

 

On that note, I’d like to discuss a show I recently started watching, “Mr. Robot.” To be honest, I was skeptical when I heard about it, mainly because shows or movies like this always seem forced or just unrealistic. For example, “Blackhat” was alright, but come on really…Thor as a hacker?!

 

Okay, alright let’s get back to Mr. Robot, which stars Rami Malek as the lead character, Elliot Alderson. In short, Elliot is a Cyber Security Engineer by day and is plagued with a severe social anxiety disorder. So, as you can imagine, he struggles with interacting and relating to most people. But, by night he connects with the rest of the world—as he hacks them and assumes the life of a cyber-vigilante.


(WARNING: SPOILER ALERT EP. 1)


Without throwing too many spoilers out there, I will only address episode 1. In this episode, we are introduced to Elliot in a dimly lit café on a cold evening. Elliot frequently goes to this café to be alone and, well, because, as he states, they have "remarkably fast Wi-Fi." However, this particular evening he has a different reason for visiting.

 

As he arrives, he sits down by the owner, alerting him that he’s going to expose him and his underground business ring to the authorities. The owner then becomes unruly and tells him to leave. Elliot calmly speaks to him and explains that just because he uses Tor, it doesn’t mean his traffic is private. He further states, “You know, whoever controls the exit nodes controls the traffic.” Immediately, the owner becomes frantic and offers Elliot money, but Elliot says it’s not about the money. With a little deductive reasoning, I’m sure you can figure out what happens next. Naturally, I was intrigued…I thought ok, they are talking technical here and it’s pretty suspenseful at that—pretty good start.

 

Now to the second part of the episode. The company Elliot works for is hit with a DDoS attack. Ultimately, he stops the attack, but in the process he receives a message from a hacker group called “fSociety.” This group is one of many hacker groups that are collectively planning a digital revolution by deleting debt records from one of the largest corporations in the world. The problem is, they need Elliot to achieve their end goal. Annnnd…that’s where I’m going to stop. Sorry, you’ll have to tune in yourself to see what’s been happening in the latest episodes.

 

So, this leads me to my main question, could this fictional story really happen today? And, if you’ve been tuning in, what do you think so far? Are some of the hacks or scenarios that occur realistic/possible? Let us know or just leave your thoughts on the latest episode—Tune in Wednesdays, 10/9C on USA.

On occasion, consideration will be given to migrating applications to the cloud. There are many reasons why an organization might choose to do this.

 

Full application upgrades, and the desire to simplify that process going forward, are one. The desire to centralize licensing and the rollout of future upgrades can also drive the move of application sets to the cloud. The end of life of older operating system versions, which forces upgrades, is a big motivator, as are corporate acquisitions and the goal of bringing a large, often geographically diverse, workforce into the organization's structures.

 

Above all, the desire to provide stability through hardware redundancy can be a motivator.

 

When a company wants to augment its internal datacenter with additional functionality, redundancy, hardware, uptime, etc., moving applications to a hybrid cloud model can be the ideal way to reach that application nirvana.

 

Moving a legacy application to a hybrid model is a multi-layered process, and sometimes it cannot be done at all. In the case of legacy, home-grown applications, we often find ourselves facing a decision between a complete rewrite and a retrofit. In many cases, particularly with arcane databases where the expertise regarding design, upgrade, or installation has long since left the organization, a new front end, possibly web-based, may make the most sense. Often, these databases can remain intact, or require just a small amount of massaging to function properly in their new state.

 

Other applications, those more "information worker" in focus, such as Microsoft Office, already have online equivalents, so migrating them to the cloud is unlikely to be as much of a problem as a complete reinstallation or migration. Components like authentication have been smoothed out, such that these functions have become far more trivial than they were at inception.

 

However, as stated above, there are a number of high-visibility and highly reasonable purposes for moving apps to the cloud. The question has to be: which approach is most ideal? As I've stated in many of my previous posts, the global approach must be paramount in the discussion process. For example, if an application upgrade is the motivation, the goal may be to create a new hosted environment with a full-stack install of the new version's code, a cloning/synchronization of the data, and then whatever upgrades to the data set are required. At that point, you have a new, fully functional platform awaiting testing and deployment. This method allows a complete point-in-time backup to be maintained, with the ability to revert should it become necessary. Another consideration that must be entertained is how to deploy the front end. How is the application delivered to end users? Is there a web-based front end, or must fat applications be pushed to the endpoints? A consideration at that point would be a centralized, cloud-based application rollout such as VDI.

As you can see, there are many planning considerations involved in this type of scenario. Good project management, with attention to timelines, will ensure that a project such as this proceeds smoothly.

 

My next installment, within the next couple of weeks, will take on the management of Disaster Recovery as a Service.

Moving your databases to the cloud clearly isn't for everybody; however, there are definitely implementations where it makes sense. Smaller businesses and start-ups can often save considerable capital outlay by not having to invest in the hardware and software required to run an on-premises database. In addition to cost savings, there can be several other advantages as well. The cloud offers pay-as-you-go scalability: if you need more processing power, memory, or storage, you can easily buy more resources. Plus, there's the advantage of global scalability and built-in geographical protections. By its very nature, the cloud can be accessed globally, and most cloud providers have built-in geographical redundancy, where your storage is replicated to secondary regions that could be hundreds or thousands of miles away from your primary region. Finally, the cloud offers simplified operations. Businesses don't have to worry about hardware maintenance or software patching; the cloud provider takes care of all of those basic maintenance tasks.

 

Choosing a cloud destination

If you've decided that a move to the cloud might pay off for your business, it's important to realize that not all cloud databases are created equal. There are essentially two paths for running your databases in the cloud, and the main database cloud providers used by most businesses today are Amazon RDS and Microsoft Azure. When you go to deploy a database in the cloud, you can use an Infrastructure-as-a-Service (IaaS) approach, where you run your database inside a virtual machine (VM) hosted by a cloud provider, or you can use a Database-as-a-Service (DBaaS) approach, where you use the database services directly. An example of an IaaS implementation would be an Azure VM running Windows Server and SQL Server; an example of the DBaaS approach is Azure SQL Database. The trade-off between the two essentially boils down to control: you have more explicit control over the OS and the SQL Server settings in an IaaS VM implementation, and less control over those factors in an Azure SQL Database implementation.

 

SQL Server - Azure Migration Tools

After you've decided to go with either a DBaaS or an IaaS type of cloud implementation, the next step is to pick the tools you will need to migrate your data to the cloud. For an IaaS SQL Server implementation, you can use the following tool:

 

  • Deploy a SQL Server Database to a Windows Azure VM wizard -- The wizard makes a full database backup of your database schema and the data from a SQL Server user database. The wizard also creates an Azure VM for you.

 

If you are migrating to an Azure SQL Database then you can consider the following migration approaches.

 

  • SQL Server Management Studio (SSMS) – You can use SSMS to deploy a compatible database directly to Azure SQL Database, or you can use SSMS to export a logical backup of the database as a BACPAC, which can then be imported by Azure SQL Database (a scripted example of producing a BACPAC follows this list).
  • SQL Azure Migration Wizard (SAMW) – You can also use SAMW to analyze the schema of an existing database for Azure SQL Database compatibility and then generate a T-SQL script to deploy the schema and data.
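For those who would rather script that BACPAC export than click through SSMS, here is a hedged sketch that shells out to the SqlPackage utility. It assumes SqlPackage is installed and on the PATH; the server name, database, credentials, and target path are placeholders, not values from this article.

```python
# Hedged sketch: produce a logical .bacpac export with SqlPackage so it can
# later be imported into Azure SQL Database. Server, database, credentials,
# and paths below are placeholders; SqlPackage must already be installed.
import subprocess

source = ("Server=tcp:onprem-sql01;Database=SalesDB;"
          "User ID=migration_user;Password=<password>;")

subprocess.run(
    [
        "SqlPackage.exe",
        "/Action:Export",
        f"/SourceConnectionString:{source}",
        r"/TargetFile:C:\exports\SalesDB.bacpac",
    ],
    check=True,  # raise CalledProcessError if the export fails
)
```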

 

Up in the Clouds

How you actually get to the cloud depends a great deal on the type of database cloud implementation that you choose. If you want more control, the IaaS option is best. If you want fewer operational responsibilities, the DBaaS option might be the better way to go.

 

Digital Attack Map, December 25, 2014


“DDoS trends will include more attacks, the common use of multi-vector campaigns, the availability of booter services and low-cost DDoS campaigns that can take down a typical business or organization” - Q1 2015 State of the Internet Security Report

 

“Almost 40% of enterprises are completely or mostly unprepared for DDoS attacks”. - SANS Analyst Survey 2014

 

 

Over Christmas 2014, Microsoft's Xbox Live and Sony's PlayStation Network were hit by massive DDoS attacks from the hacking group Lizard Squad. Xbox Live and PlayStation Network were down for 24 hours and two days, respectively. Online gamers were not happy, I'm sure.

 

Earlier this year, GitHub, the largest public code repository in the world, was intermittently shut down for more than five days. The DDoS attacks were said to be linked to China's "Great Cannon".

 

We'll never stop hearing about new victims (or old ones) crippled by distributed denial of service (DDoS) attacks. In fact, every new security report cites a record-breaking number of DDoS attacks compared to the previous one. The latest data shows an increasing number of Simple Service Discovery Protocol (SSDP) attacks. I find this scary: any unsecured home device using the Universal Plug and Play (UPnP) protocol can be used for reflection attacks.

 

Has your company or organization suffered a DDoS attack?

How does your organization detect DDoS threats?

What DDoS mitigations does your organization implement?

 

 

INFRASTRUCTURE MONITORING AND DETECTION

Studies have found that the majority of DDoS attacks are volumetric attacks at the infrastructure layer. Firewalls, IPS/IDS, NGFWs, and IP reputation services should be deployed in a defense-in-depth manner, not only to protect an organization's network perimeter against DDoS, but also to detect any infected device inside the network launching DDoS attacks against targets within or outside the organization. NetFlow, or any flow-based technology, is indispensable for providing visibility into network abnormalities.
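As a minimal illustration of the visibility flow data can provide, here is a hedged Python sketch that flags destinations suddenly receiving traffic from a very large number of distinct sources, one rough signature of a volumetric or reflection attack. The record format and the threshold are assumptions for illustration only, not any particular collector's schema.

```python
# Hedged sketch: one crude DDoS signal is a single destination suddenly
# receiving traffic from a huge number of distinct sources. Flow records
# are assumed to be simple (src, dst, bytes) tuples pulled from a
# NetFlow/IPFIX collector; the threshold is illustrative only.
from collections import defaultdict

def suspect_ddos_targets(flows, min_sources=5000):
    sources_per_dst = defaultdict(set)
    for src, dst, _bytes in flows:
        sources_per_dst[dst].add(src)
    return [dst for dst, srcs in sources_per_dst.items() if len(srcs) >= min_sources]

# Usage: feed in the last few minutes of flow records
flows = [("198.51.100.7", "203.0.113.10", 1200),
         ("198.51.100.8", "203.0.113.10", 900)]
print(suspect_ddos_targets(flows, min_sources=2))   # ['203.0.113.10']
```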

 

APPLICATION MONITORING AND DETECTION

Application firewalls, host-based IPS/IDS, and application delivery controllers (ADCs) provide visibility and protection against malicious traffic up to Layer 7. And most importantly, don't forget to patch your systems.

 

SECURITY INFORMATION AND EVENT MANAGEMENT (SIEM) AND HUMAN ELEMENT

You feed all the logs, flow data, packet captures, etc. into a SIEM. Then what? I believe that a SIEM is not a SIEM without the human element. Even though vendors include many pre-built alerts and reports, it takes humans, and a lot of man-hours, to fine-tune them to fit an organization's needs. Also, who says a DDoS won't start at 2 AM? That is why 24x7 coverage (think of a NOC) is necessary.

 

THIRD PARTY PROVIDER

Recently, we were approached by one of our service providers; they provide Security Operations Center (SOC) services to customers. In other words, they give customers 24x7x365 SIEM coverage. Service providers can also provide automatic DDoS mitigation, upstream blackholing, or even global content delivery network (CDN) services.

 

DDOS FIRE DRILL

Just as organizations perform disaster recovery tests once or twice a year, annual DDoS tests should be conducted. They let all IT departments get familiar with DDoS incident handling, and they reveal weaknesses in the organization's DDoS mitigation so those can be improved.

 

 

In the '80s sci-fi movie WarGames, there was a scene in which big monitors in the situation room showed traces of global missile attacks. Do you want to see something similar in real life? OK, OK, no missile attacks. Check out the following websites for a taste of current cyberattacks in real time.

 

Cyber Threat Map from FireEye

IPViking Map from Norse Corp

Digital Attack Map from Arbor Networks and Google

I've really enjoyed watching (and re-watching, a few times) this video SolarWinds made in honor of SysAdmin Day (SysAdmin Day Extreme). What appeals to me, besides the goofy sound effects and scenarios (Seriously, guys? "Parkour Password Reset"?!?), is the underlying premise: that sysadmins are adrenaline junkies and our job is a constant series of RedBull-fueled obstacles we must overcome. Because even though that doesn't match the reality in our cubicles, it is often the subtext we have running through our heads.

 

In our heads, we’re Scotty (or Geordi) rerouting power. We’re Tony Stark upgrading that first kludged-together design into a work of art. We’re MacGuyver. And yeah, we’re also David Lightman in “War Games”, messing around with new tech and unwittingly getting ourselves completely over our head.

 

As IT Professionals, we know we’re a weird breed. We laugh at what Wes Borg said back in 2006, but we also know it is true: “…there is nothing that beats the adrenaline buzz of configuring some idiot’s ADSL modem even though he’s running Windows 3.1 on a 386 with 4 megs of RAM, man!”.

And then we go back to the mind-numbing drudgery of Windows patches and password resets.

 

I've often said that my job as a sysadmin consists of long stretches of soul-crushing frustration, punctuated by brief moments of heart-stopping panic, which are often followed by manic euphoria.

 

In those moments of euphoria we realize that we're a true superhero, that we have (as we were told on Saturday mornings) "powers and abilities far beyond those of mortal man."

 

That euphoria is what makes us think that 24 hour datacenter cutovers are a good idea; that carrying the on-call pager for a week is perfectly rational; that giving the CEO our home number so he can call us with some home computer problems is anything resembling a wise choice.

 

So, while most of us won't empty an entire k-cup into our face and call it "Caffeine Overclocking", I appreciate the way it illustrates a sysadmin's desire to go to extremes.

I also like that we've set up a specific page for SysAdminDay and that along with the video, we've posted links to all of our free (free as in beer, not just 30 day demo or trial) software, and some Greeting Cards with that special sysadmin in your life in mind.

 

Oh, and I've totally crawled through a rat’s nest of cables like that to plug something in.

 

What are some of your "extreme SysAdmin" stories?

Bronx

Windows 10: An Honest Review.

Posted by Bronx Jul 31, 2015

Okay, I'm sure you've read my thrashing concerning Windows 8 and the slight pep I had towards Windows 8.1. Now we have Windows 10. I'll keep this brief. (Probably not.)


The good:

  • Total download and installation time? About 2.5 hours. Once installed, nothing was lost or unfamiliar – unlike Windows 8. Classic Shell was promptly uninstalled – by me.
  • Installation and upgrading was near flawless. Kudos to the Dev team for that! (Only hiccup was a false message saying I was low on memory, which I'm sure an update will rectify posthaste since this is not the case.)
  • Compatibility was not an issue. Everything works thus far.
  • Start menu is back; however, at the last minute, Microsoft decided to hide/remove a needed Start menu tweaking tool which would enable the user to re-arrange the Start menu. (Removing options is bad. See my review of Windows 8.) To their credit, some advanced tweaks to the Start menu can be performed through a quick Google search and simple instructions. (Normal tweaks are obvious.)
  • Minimize, Maximize, and Close buttons are back. Also the snap-to feature has been enhanced – can do quadrants now.
  • Multiple virtual desktops. Good, I guess. Although if you're organized, one is sufficient.


The bad:

  • After using the available tweaks, the Start menu is still...not like I want it to be – better than it was in Windows 8 and closer to the Windows 7 experience, just not perfect.
  • A computerized narrator to “help” me after installation was quickly quashed. (Think Clippy from Word, circa 2003.)


Indifferent:

  • New, redesigned menus/options for the fingers abound, but some of the old-looking stuff lingers (which I prefer). Some older menus are colorful and mouse-centric, while newer menus are flat, gray, and designed for finger tapping. Very Visual Studio 2012-esque. It also looks as though time was the enemy, meaning the Dev team did not have time to upgrade deeper menus/options, because while some option windows look much different, other options still have the Windows 7 look and feel. Whatever. Consistency is a nice-to-have.
  • I would add the new version of IE to the Bad list, but I use Chrome. IE (or whatever the new name is) is still...intrusive – requiring too much input for functions that should be automated. Microsoft, focus on the OS...the rest of the world will take care of the apps. Sorry to be so blunt but, well, not really.

 

Edge:

Edge is the new version of IE. Personally, I have not used IE since IE 7. Edge, I'm sorry to say, is still IE. I like the minimalist approach, but the engine needs to go.

    The Good: Looks better and cleaner, offers browsing suggestions, provides the ability to share pages.

    The Bad: Still IE, meaning certain pages do not render properly, as they do in Chrome. (Chrome does not add "hidden" padding to Div tags, among others, where IE and Firefox do.)

    Conclusion: Try it. I still prefer Chrome. This is a personal choice akin to your favorite beer - both will get the job done, but what flavor do you prefer?

 

Talking to your computer:

Cortana. Good idea, but you're not ready. (No one is, really.) Few people use Siri (iPhone) or Google voice (Android). I do use the Amazon Echo, however. The difference? People are programmed to type with a computer. Being quiet while doing anything on a computer invokes a sense of privacy. Talking to a machine that is primarily used for typing and mousing is unnatural. The Echo is new and can only be used via speech, so the transition is more natural, although odd at times. I know Microsoft feels a need to catch up, and that's fine. Just do it in the lab until you're ready; nay, better!

 

Summary:

Microsoft is trying to compete in the tablet market, and rightly so. I applaud capitalism; however, they are losing, and that's okay. I do have some firm opinions which may clear things up.


Microsoft, you are a computer company primarily focused on computer operating systems. Focus on that. If you want to enter the tablet market with a tablet OS, great! But know your audience. Your audience is overwhelmingly PC users who work, not tablet users who play.


If you insist on adding tablet features, keep them independent of the PC features – even on the same device. Meaning, do not add tiles (tablet features) to the Start menu (PC feature). If you must enter the tablet domain, you should have a simple tray icon to toggle between PC and tablet modes (work and play modes, respectively). That's it. Go from work to play mode in one click/tap. Bam! That would make all the difference and crush the competitors IMHO. (Although, I would rethink the tiles and replace them with something less intrusive and more customizable.)


Advice:

Continue improving Windows until we get to that Minority Report hologram computer. Make Windows more functional, more seamless, more integrated, easier to use, and with lots of cool options. (Options are HUGE.) Once you nail that, then add speech and speech recognition (when that technology is near flawless). BTW, we all kill the bloatware. Rely on yourself, not the ads of others please.


Bottom line:

Good and worth the price (free). I would pay the $119 asking price too if I were assured the Start menu could be tailored to my needs. Nice comeback. Beer's on me, but just the first.

DanielleH

5 Deadly SysAdmin Sins

Posted by DanielleH Jul 30, 2015

If a bunch of SysAdmins all went out for a beer (or two) after work, there is a 99% chance they would swap user stories from their "hall-of-shame" collections. It's hard to believe that in this day and age they still get the deer-in-the-headlights look when they ask a user, "What version of Windows do you have?" At the end of the day, we're all human and make mistakes or ask dumb questions (gallery of them here), and some might even call it job security for IT pros.

 

I personally enjoy hearing about a SysAdmin's worst moments, flops, and face palms in their career, because there's nothing like learning through failure. In honor of SysAdmin Day, I'm sharing 5 Deadly SysAdmin Sins some of us might be guilty of. Hopefully you got away with them; some of us aren't always so lucky! Feel free to add on.

 

5 SysAdmin Sins

1) Thinking the problem will just go away

2) Blaming the blue screen of death before it becomes a real issue

3) Rebooting the firewall during business hours

4) Upsetting the president/CEO's secretary (they wield a big stick)

5) When all else fails, blame the network (as long as you're not the network guy too)

Shadow IT refers to a trend where users adopt IT tools and solutions outside of the knowledge or control of the official IT department. If the IT department is aware of a system, or has policies that allow systems it doesn't manage to be used, then it's not shadow IT; but if IT doesn't know about the system and offers a comparable service, then it is. For example, most IT departments are responsible for providing email. If a user chooses to use Gmail or some other email provider, then IT isn't able to manage the risk of corporate data getting lost or stolen, email spam, or phishing attacks.

 

The use of shadow IT can be hard to detect. Although many agencies have network policies blocking certain sites or types of traffic, the sheer quantity and diversity of the services available can easily overwhelm an already overworked IT department. So why should they even bother? If the user is able to find a solution that works on their own, more power to them, right? Unfortunately, it's not that easy. When users circumvent IT and then something goes wrong (the service goes down, they lose data that was only hosted there, someone steals their credentials and copies all of the sensitive data), they look to IT for help. This leads to conversations like, "I know I'm not supposed to do this, but will you please help me make sure nobody else was able to access those files on Dropbox?"

 

The Threat
From our recent State of Government IT Management and Monitoring Survey, the primary concern regarding the use of shadow IT is security issues. And the use of shadow IT is in full force, with 90% of respondents seeing shadow IT being used in their environment today and 58% expecting to see it continue to be used.


 

Not only was shadow IT not a top focus area, it actually ranked at the bottom, with only 12% saying it was very important (versus 72% indicating cyber security was very important). Given that 90% of federal and civilian agencies believe shadow IT is in use in their environment, that it ranks second among the areas IT has the least control over, and that the biggest negative consequence of shadow IT is security issues, it's shocking that shadow IT isn't getting more focus.

 

How to respond
To create a strategy for managing shadow IT, you need to understand why your users are looking to it. Even in networks with no direct connectivity to the Internet, computer systems and critical data can easily be misused, and the risk of compromise is real. To manage all of these risks, you need to understand why your users go around you and make it easier for them to work with you instead.

 

From the survey, we saw that the IT acquisition process is the main trigger for shadow IT, followed by a perceived lack of innovation by the IT department. Of course, there is a long tail of other reasons, and you should survey your users to understand exactly why they are using systems outside of your purview and specifically what those systems are.


 

One of the questions we strove to unravel during this survey was what to expect in the future, and as it turns out, there is a lot of confusion around what should be done about shadow IT as a whole. About a quarter of those surveyed believe it should be eliminated, another quarter think it should be embraced, and the remaining half are somewhere in between.


Although this split may appear to be conflicting, it actually makes sense. Some environments are too sensitive to tolerate any IT services that are not strictly controlled by IT. However, in many agencies, particularly civilian ones, the IT department has an opportunity to identify ways of providing better service to their customers by understanding why their users are looking elsewhere. Once a system, service, or tool has been evaluated by IT and put on the acceptable list, it’s no longer considered shadow IT. If IT can leverage these opportunities, they might be able to both deliver better service and create more productive relationships in their agencies.

 

What is clear, however, is that the more visibility you have into your environment, the more confident you will be in your ability to protect your agency against the negative consequences of shadow IT.

 

Full survey results:

http://www.solarwinds.com/assets/industry/surveys/solarwinds-state-of-government-it-management-and-m.aspx

Ten years ago, IT was a very different space for SysAdmins than it is now, and I too remember the days when server sprawl needed wrangling (hello, virtualization), work silos were everywhere (what is that storage guy's name again?), and datacenter space was in short supply (maybe some things never change).

 

As I read through comments on the “Gearing up for SysAdmin Day 2015: The Ever-changing role of the System Administrator” post, some key skills for SysAdmins come to mind that are going to be critical in the future.

 

Business understanding

 

Gone are the days when SysAdmins sat in their lair and controlled their domain from behind a monitor and phone. Increasingly, the business looks to the SysAdmin to help the company move forward and grow. SysAdmins today should look to have a seat at the table when the company is planning long-term business strategy. Making sure business planning is coupled with IT planning will ensure long-term success. In order to do this, SysAdmins should be able to articulate how IT affects the company's business and bring new ideas and technologies to the table. Understanding new technologies leads to the next point, around learning.

 

Learning new skills and technology is critical

 

Any SysAdmin worth their weight in gold keeps a constant eye on technology advances in the industry. Now, we all know keeping an eye on new technology and actually learning it are two different things. However, as technology advances and changes keep accelerating, SysAdmins need to recognize that ongoing learning is critical. Learning how to do things more easily and helping the business run more efficiently is key for all SysAdmins, no matter their experience. For years, when IT learning came up, leadership would tune out; that cannot be the case anymore. SysAdmins have to be champions of change and make sure they pick up the skills needed to help their company.

 

Being purposeful

 

Finally, SysAdmins need to be purposeful. In an ever-changing world, having a direction is critical to keeping up (let alone staying ahead). From new technologies, to new ways of doing things (or going back to an old way), to changing business requirements, a SysAdmin's world is all about change. Having a clear understanding of where you want to take your IT department, your solutions, and your career is key to long-term success. Here are some questions to help you:

 

  • Where is the company heading?
  • How can IT play a part in that success?
  • What new technologies can help ensure success?
  • What skills are needed?

 

As in most roles in business, the SysAdmin role is evolving and changing. Business understanding, learning new skills, and being purposeful are just a few skills needed today and tomorrow. However, these are core skills that can help SysAdmins grow and advance for the long-term. 

 


 

*P.S* – Celebrate the small things. You won’t get credit 365 days a year, but there is one day your hard work doesn’t go unnoticed – SysAdmin Day, y’all! It’s this Friday and naturally, we’re pre-gaming with some fun freebies. Come celebrate >>

Last time, I mentioned that I would take a look at some of the major cloud providers' database-as-a-service offerings to show how each one works and how it is set up. With that said, I decided to start with the offering from Microsoft's Azure cloud. Below, I will take you through creating a database and then show how to connect to it.

 

Creating an Azure SQL Database

 

I figured since the entire process takes less than 5 minutes to do I would just create a short video on how this works. You can check it out here.

 

Since I could not add audio at the time I recorded it, I will describe what happens in the video.

 

  • Basically, we just log into the Azure Portal, so the biggest prerequisite is that you have an Azure account; if you do not, it takes just a couple of minutes to get going.
  • Once logged in, we click on the SQL Databases tab and click Add at the bottom of the page.
  • Once we do that, we get some options for creating our database. I picked Custom Create because I wanted to show you the most options.
  • Next, we name our database and pick how big we think we want it to be. Don't worry too much here... we can scale this up or down later if we need to. Lastly, I pick New Server to put this database on; if you wanted to add this database to an existing server, you can do that too.
  • Next, enter login credentials for our database server and pick where we want the new server to be created geographically. We can also pick whether we want all Azure VMs to be able to connect by default, as well as whether we want to use the latest version of SQL.
  • When we click Finish, the database server is created and then the database. This takes about a minute, but then you will see a status of "Online" on the database.

 

After this process we are ready to connect to our new database!

 

Connecting to your Azure SQL Database

OK, now that we have a database in the cloud, we need to connect to it. I will use an Azure virtual machine, but anything from anywhere can connect to an Azure DB, provided you allow it in the security settings.

Things we will need for this step include:

  • Username you created before
  • Password you created before
  • Server Name (and FQDN)
  • A server using ODBC (or something else compatible) to connect to the Azure SQL server

 

OK let's get started. I am using an Azure VM for this demo, but there is no reason why you couldn't connect from anywhere in the world. Just make sure to see my TIP section toward the end of the article for details on security settings you will need to change.


After logging in to your server, you will need to prepare it just like any other machine that talks to SQL Server... so go install the SQL Server Native Client so that we can connect from the ODBC Data Source Administrator.

 


Once the SQL Native Client is installed, click Add and select it as the new data source.

 

Now, before I go any further in the wizard on our client, let me show you where to get the server name for your database.


In the Azure portal, click on SQL Databases and then take a look at the list of databases. In this example, we want to connect to the thwack database. If we look in the server column, we see the server's hostname. That name will be used to finish out our FQDN, which is simply <servername>.database.windows.net. You can also click on the database to see more detailed information; the FQDN is listed on the right-hand side there too.

 

OK back to the Client wizard.


Now that we know the FQDN, we can enter a connection name and the server name into the wizard, then click Next.

 


Next, we enter the login credentials that we created along with the database and database server. If you don't remember them, you can use the Azure portal to reset the password and determine the login name.

 


SNAG!!!! Yes, I really did get this message. But if you actually read it, which I don't typically do (who needs a manual, right?), you will see that it gives you a big hint... use username@servername instead of just username. Presumably this is because I am using an older SQL Native Client.

 


So I added @servername to the Login ID field and, presto, I was able to log in.

 


Just like any other SQL server, you can then also select a database to make the default. I like to use the drop-down to verify that the ODBC client can actually see the databases on the server. Here we can select our thwack database, so we know it's working now!

 


There you go! All finished, with a new cloud-hosted database connected to a cloud server... ready to be used.
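If you would rather test the connection from code than from the ODBC wizard, here is a minimal pyodbc sketch that does roughly the same thing. The server name, database, and credentials are placeholders, the driver name assumes the Native Client installed earlier is visible to pyodbc, and note the username@servername form that resolved the snag above.

```python
# Hedged sketch: the same connection the ODBC wizard makes, done from code.
# Server, database, and credentials are placeholders; the driver name assumes
# the SQL Server Native Client installed above is available to pyodbc.
import pyodbc

server = "yourserver.database.windows.net"    # the FQDN shown in the Azure portal
conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 11.0};"
    f"SERVER={server};"
    "DATABASE=thwack;"
    "UID=dbadmin@yourserver;"                 # user@servername, not just the user
    "PWD=<password>;"
    "Encrypt=yes;"
)
print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])
```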

 

Tip: Connecting from outside of Azure

One thing I haven't shown you so far is the Security section of the Azure DB settings. Because I used an Azure VM, I did not need to modify any of the security settings; Azure DB understands that the traffic is coming from a (fairly) trusted source. In order to connect from a server at your office, or somewhere similar, you will need to log in to the Azure Control Panel, navigate to the database server that contains your database, and click "Configure". On the Configure page you will see a list of IP addresses, or at least a box where you can add some; this is where you will enter the Internet IP of your server. Once you do, make sure to click Save at the bottom of the page, and you should then be able to connect from that IP address.


 

Backup/Restore and Replication

 

This part is always important, no matter where your database lives. Azure DBaaS makes it really simple; in fact, your backups are done automatically and are completed on a schedule based on your database type, so make sure to check that schedule, and if it doesn't fit your needs, you may need to scale the database up in order to get more backups. To restore, you simply go to the database in question; at the bottom of the portal there is a Restore button.


Once you click that button you will see a restore settings box.

 


In this box, you can specify a database name for the restore to go to, as well as the restore point... it has a DVR-like slider... my database just hasn't been online very long, so I don't have a long history.

 


At the bottom you can monitor progress on the restore. For my blank database it took about 5 minutes for it to restore.

 


Once it is restored, it will appear in the database list, ready to be connected to.

 

On the replication side, things are a little bit more involved, but still not too bad.

From the database dashboard we click on Geo-Replication. Then at the bottom we can select Add Secondary.


Click the Add Secondary button to get started.

 


All we need to do is configure the type of replication by selecting the secondary type (read-only or non-readable), where we want it to replicate to geographically, and finally the target SQL server... I had already created a SQL server in the West region, but if you haven't, just select New Server and it will run you through the SQL server wizard just like we did earlier.

 


Confirm that they can bill you more

 


And there you go! SQL replication in place with a pretty darn respectable RPO! Check here for details.

 

So what did you think?

Well, that wasn't too bad, was it? I have to admit that I've used Azure's DBaaS before, so it was familiar to me, and getting another DB set up for this post was super quick... except for the ODBC error I encountered... but the error was descriptive enough that it didn't take too long to fix the issue.

 

So, making it that easy to get a database is a great thing, but also possibly a bad thing! This is exactly the kind of thing that causes shadow IT, but you can prevent this and still provide the same ease and speed if you have the Windows Azure Pack implemented.

 

So here are the homework questions:


How long does it take to get a database at your IT shop?

 

Have you ever found someone leveraging Azure DBaaS in a shadow IT scenario due to ease of use?

 

Are you using the Microsoft Azure Pack to build an internal DBaaS offering?


 

Like many of you, I suspect, I am firmly under the thumb of my corporate security group. When they want to know what's going on on the network, I'll put in as many taps as they want; but sometimes they want more than that, especially where NAT is involved and IPs and ports change from one side of a device to the other, making it otherwise difficult to track sessions.

 

 


High Speed Logging

 

Have you heard of High Speed Logging (HSL)? I’d like to say that it’s a single standard that everybody plays by, but it seems like more of a concept that people choose to interpret in their own way. HSL is the idea of logging every single session, and in particular every single NAT translation.

 

So who offers HSL? Well, in theory if you can log session initiation to syslog, you can do High Speed Logging, but some vendors explicitly support this as a feature. For example:

 

 

 


Potential Road Blocks

 

High Speed Logging sounds simple enough, but there are a few things to consider before configuring it everywhere.

 

 

 

  1. What is the impact on your device when you enable High Speed Logging?
  2. Is it possible that the volume of logging messages generated might exceed the capacity of your device’s management port? One solution to this is to use an in-band destination for HSL instead.
  3. With multiple devices logging at high speed, can your management network cope with the aggregate throughput?
  4. Once you have the logs, how long do you have to keep them? Do you have the necessary storage available to you? (A rough sizing sketch follows this list.)
  5. For those logs to be useful, any system that is doing analysis, filtering or searching of the logs is going to have to be fairly fast to cope with the volume of data to look through.
  6. Will a DDoS attack also DoS your logging infrastructure?
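To put rough numbers on points 2 through 4, here is a back-of-envelope sketch. Every figure in it is an assumption; swap in your own session rates and message sizes.

```python
# Hedged back-of-envelope: what per-session logging can add up to. Every
# number here is an assumption; plug in your own session rate and log size.
sessions_per_second = 50_000      # aggregate across all logging devices
bytes_per_log_line = 300          # one syslog message
messages_per_session = 2          # session start + session end

bytes_per_second = sessions_per_second * messages_per_session * bytes_per_log_line
bandwidth_mbps = bytes_per_second * 8 / 1e6
storage_tb_per_day = bytes_per_second * 86_400 / 1e12

print(f"~{bandwidth_mbps:.0f} Mbps of syslog traffic")      # ~240 Mbps
print(f"~{storage_tb_per_day:.1f} TB of raw logs per day")  # ~2.6 TB/day
```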

 

 

How Powerful Is Your Engine?

 

The question of whether a device is capable of supporting HSL should not be underestimated. In one company I worked at, the security team logged every session start and end on the firewalls. To accommodate this, the firewall vendor had to provide a special logging-optimized code build, presumably at the cost of some throughput.

In another case I’ve seen a different firewall vendor’s control CPU knocked sideways just by asking it to count hits on every rule, which one would imagine should be way less intensive than logging every session.

 

 

Navigation

 

Assuming you manage to get all those logs from device to log server, what are you going to do with them? What's your interface for searching this rapidly growing pile of log statements, undoubtedly in different formats, in a timely fashion? Are you analyzing in real time and looking for threats? Do you pre-process all the logs into a common, searchable format, or do you only process them when a search is initiated?
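As one possible answer to the pre-processing question, here is a small sketch that normalizes a made-up NAT log line into a common structure. Real vendor formats differ, so the regular expression is purely illustrative and would need adapting.

```python
# Hedged sketch: normalize vendor syslog lines into one searchable shape.
# The log format and field names are invented for illustration; real
# NAT/HSL messages differ per vendor, so the regex would need adapting.
import re

NAT_LINE = re.compile(
    r"src=(?P<src>\S+):(?P<sport>\d+)\s+"
    r"dst=(?P<dst>\S+):(?P<dport>\d+)\s+"
    r"xlated=(?P<xsrc>\S+):(?P<xport>\d+)"
)

def normalize(line):
    """Return a dict of fields, or None if the line doesn't match."""
    m = NAT_LINE.search(line)
    return m.groupdict() if m else None

sample = ("Jul 30 02:14:07 fw1 NAT: src=10.1.2.3:52344 "
          "dst=93.184.216.34:443 xlated=203.0.113.5:40112")
print(normalize(sample))
# {'src': '10.1.2.3', 'sport': '52344', 'dst': '93.184.216.34', ...}
```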

 

 

Bicycles Are Nice Too

 

I suppose the broader question is whether High Speed Logging is actually the right solution to the problem of network visibility. Should we be doing something else using taps or SPAN ports instead? Having worked in an environment that logged every session on the firewalls, I know it is tremendously useful when troubleshooting connectivity issues, but that doesn't make it the best or only solution.

 

 

What do you think? Have you tried High Speed Logging and succeeded? Or failed? Do you think your current log management products (or perhaps even just the hardware you run it on) are capable of coping with an onslaught of logs like that? What alternatives do you use to get visibility? Thanks in advance for sharing your thoughts!

In our second visit back to our 2015 pro-dictions, let's take a look at the evolution of the IT skill set. It appears Patrick Hubbard was spot-on in his prediction about IT pros needing to broaden their knowledge bases to stay relevant. IT generalists are staying with what they know best, while IT versatilists and specialists are paving the way to the future.



Earlier this year, kong.yang wrote a blog addressing this very topic, which he dubbed the Stack Wars. There, he lightly touched on generalists, versatilists, and specialists. Read the following article to get a deeper look at each of the avenues IT pros can pursue: “Why Today’s Federal IT Managers May Need to Warm Up to New Career Paths.”

 

Fans of spy fiction (of which there are many in the government ranks) might be familiar with the term “come in from the cold.” It refers to someone who has been cast as an outsider and now wishes to abandon the past, be embraced, and become relevant again.

 

It’s a term that reflects the situation that many federal IT managers find themselves in as government agencies begin focusing on virtualization, automation and orchestration, cloud computing, and IT-as-a-Service. Those who were once comfortable in their position as jacks-of-all-IT-trades are being forced to come in from the cold by choosing new career paths to remain relevant.

 

Today, there’s very little room for “IT generalists.” A generalist is a manager who possesses limited knowledge across many domains. They may know how to tackle basic network and server issues, but may not understand how to design and deploy virtualization, cloud, or similar solutions that are becoming increasingly important for federal agencies.

 

And yet, there’s hope for IT generalists to grow their careers and become relevant again. That hope lies in choosing between two different career paths: that of the “IT versatilist” or “IT specialist.”

 

The IT Versatilist

An IT versatilist is someone who is fluent in multiple IT domains. Versatilists have broadened their knowledge base to include a deep understanding of several of today's most buzzed-about technologies. Versatilists can provide their agencies with the expertise needed to architect and deliver a virtualized network, cloud-based services, and more. Versatilists also have the opportunity to have a larger voice in their organization's overall strategic IT direction, simply by being able to map out a future course based on their familiarity with deploying innovative and flexible solutions.

 

The IT Specialist

Like versatilists, IT specialists have become increasingly valuable to agencies looking for expertise in cutting edge technologies. However, specialists focus on a single IT discipline. This discipline is usually directly tied to a specific application. Still, specialists have become highly sought-after in their own right. A person who’s fluent in an extremely important area, like network security, will find themselves in-demand by agencies starved for security experts.

 

Where does that leave the IT generalist?

Put simply – on the endangered list.

Consider that the Department of Defense (DoD) is making a major push toward greater network automation. Yes, this helps take some items off the plates of IT administrators, but it also minimizes the DoD's reliance on human intervention in its technologies. While the DoD is not necessarily actively trying to replace the people who manage their networks and IT infrastructure, it stands to reason that those who have traditionally been "keeping the lights on" might be considered replaceable commodities in this type of environment.

 

If you're an IT generalist, you'll want to expand your horizons to ensure that you have deep knowledge of and expertise in IT constructs in at least one relevant area, where relevant means a discipline that is considered critically important today. Most likely, those disciplines will center on things like containers, virtualization, data analytics, OpenStack, and other new technologies.

 

Whatever the means, generalists must become familiar with the technologies and methodologies that are driving federal IT forward. If they don’t, they risk getting stuck out in the cold – permanently.

 

**Note: this article was originally published in Technically Speaking

 

There's no doubt that over the past couple of years the cloud has gone from a curiosity to a core component of many companies' IT organizations. Today, Amazon AWS and Microsoft Azure are well-known commodities, and the cloud-based Office 365 has proven to be popular for businesses as well as consumers. It's even common now for many business applications to be cloud based. For instance, Salesforce.com is a popular Software-as-a-Service (SaaS) application, and many organizations have moved to cloud-based email. However, one notable holdout has been database applications. While there certainly are cloud-based database options, businesses have been more than a little reluctant to jump aboard the cloud for their databases.

 

Why the reluctance to move to the cloud?

 

The fact of the matter is that for most organizations their relational databases are the core of their IT infrastructure. Business-critical applications are built on top of those databases, and availability is paramount. While businesses can tolerate some downtime in their email, connectivity problems with their relational databases are unacceptable. And while there's no doubt that internet and cloud connectivity is better than at any point in the past, this past year's well-publicized outages have shown that it's far from perfect.

 

And then, of course, there are the control and security issues. Many organizations are just uncomfortable moving their data off premises. While the security of most cloud providers exceeds that of the average IT organization, putting your critical data into someone else's hands requires a level of trust that many organizations are not willing to extend. When your data is on-premises and backed up, you know you can restore it; the control remains within your own organization. That's not the case if your data is in the cloud. There, you need to depend on the cloud provider.

 

Another key issue for many international organizations is data sovereignty. In many countries, like Canada, businesses are required by law to keep their data within their country's borders. In the past, this has been a hurdle for many cloud providers, as cloud servers could be located anywhere and are not typically aligned with national boundaries. This is beginning to change as some cloud providers start to support national data boundaries.

 

Where Do Cloud Databases Fit Best?

 

So does this all mean that databases will never make it to the cloud? The answer is clearly no. While established medium and enterprise-sized businesses may have reservations about moving to the cloud, the cloud can make a lot of sense for smaller businesses and startups. Using the cloud can result in considerable capital savings for new businesses. SMBs that may be faced with considerable hardware upgrade costs could also find the cloud to be a compelling alternative. Just as important is the fact that cloud databases move many of the maintenance tasks, like patching and upgrades, into the hands of the cloud vendor, freeing the business from them.

 

The move to cloud databases is far from inevitable, but cost and labor savings make it a compelling option for new businesses and SMBs. In the next posting, I'll look at what happens if you do make that jump to the cloud.

 
