Geek Speak

April 2019

Back from Copenhagen and back writing the Actuator after Suzanne’s takeover last week. Thanks to everyone for their supportive comments, both public and private. I’ve got more travel coming up, so maybe we will have more takeovers. Stay tuned.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Tesla boom lifts Norway's electric car sales to record market share

One thing I noticed last week in Copenhagen was the quiet. Electric cars do more than cut fossil fuel use; they also reduce noise pollution.

 

Amazon wants to launch 3,236 satellites so it can rain down internet from space

Pretty ambitious project for an online bookstore.

 

Yahoo tries again to settle lawsuit over massive data breach. This time it offers $118 million

This works out to about 4 cents per breached account. Sorry, not sorry: that’s the message being sent here.

 

Walmart says its new robots will make human employees happier

Looking forward to a kid hacking the robots and rolling back prices.

 

Internet Explorer security flaw allows hackers to steal files

This is the old “file association” trick that has been used for various exploits over the past 20 years. Microsoft doesn’t think this is critical, but you might want to push an update to your devices that removes the default association for now.

 

540 million Facebook records left exposed due to sloppy third-party developer security

I am really starting to believe that Facebook might not be good at security.

 

Facebook spent $22.6m to keep Mark Zuckerberg safe last year

Well, Facebook can secure and protect the privacy of at least one person. That’s a start, I guess.

 

One of many beautiful views from walking around Copenhagen last week. This is Nyhavn, a fancy word for "New Harbour":

 

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article on hybrid IT and how to maintain visibility into performance. It also discusses leveraging automation to respond to threats.

 

Choosing which providers to use—let alone whether to choose a public or private cloud approach in the first place—can be paralyzing and confusing. In fact, 65% of respondents to the 2017 SolarWinds IT Trends Report stated that their agencies use up to three cloud providers.

 

Many agencies have embraced a hybrid IT model. Yes, they still have to choose which cloud providers to use and which applications to move offsite. Effectively overseeing a hybrid IT environment poses significant challenges, particularly regarding monitoring and security.

 

Expand the Visibility Horizon

 

IT managers may be accustomed to having unfettered authority over all aspects of their networks, but a hybrid IT environment often requires sharing those duties with cloud providers.

 

Plus, as packets pass from private to public clouds (and vice versa), there may be blind spots where administrators lose track of those packets. That can be nerve-wracking for security-conscious network administrators.

 

While many administrators have become highly adept at monitoring what is happening within their networks, a hybrid approach can require that they expand their horizons to see what is taking place beyond their own borders. What’s occurring with their information as it moves on-premises, in the cloud, and in between?

 

Automate and Virtualize Security to Proactively Respond to Threats

 

Being able to enforce security policies and ensure compliance across hybrid IT networks is another top security challenge. Agencies must ensure that their security protocols are being enforced across their networks and that data moving between private and public clouds is protected under those blanket policies. Policies must be automatically enforced as incidents arise, and administrators must be confident that any policies they make in-house are updated, applied, and enforced across the entire spectrum of their hybrid IT networks.

 

Employing automation to immediately address and remediate potential threats is also important. Hybrid IT networks can potentially include hundreds of applications. Automated monitoring and response can help identify and mitigate issues in minutes, rather than hours or days, minimizing risk and greatly reducing system downtime.
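
To make that concrete, here’s a minimal sketch of such a detect-and-respond loop. The alert source and the quarantine action are hypothetical stand-ins for whatever APIs your monitoring and network platforms actually expose:

```python
import time

def fetch_open_alerts():
    """Stand-in for a monitoring API call; returns alert dicts."""
    return [{"host": "10.0.0.12", "severity": "critical"}]  # invented sample

def quarantine(host):
    """Stand-in for a response action, e.g., moving a host to an isolated VLAN."""
    print(f"Quarantining {host}")

def watch(poll_interval_s=30):
    while True:
        for alert in fetch_open_alerts():
            if alert["severity"] == "critical":
                quarantine(alert["host"])  # respond in seconds, not hours
        time.sleep(poll_interval_s)
```

The point isn’t the ten lines of code; it’s that the policy ("critical alerts get quarantined") runs without waiting for a human to read a dashboard.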

 

Agencies should also consider replacing legacy hardware systems, including firewalls, with next-generation virtualization solutions. Virtual firewalls can be far more scalable than traditional hardware and can be automatically deployed and configured. They can also help improve security responses and threat mitigation across the entire hybrid network. These updates can be done in a piecemeal fashion, making it easier—and more budget-friendly—to manage the progression from legacy solutions to network modernization.

 

Although adopting a hybrid IT approach relieves administrators from having to choose between private or public clouds, an abundance of decisions remain. Administrators must commit to their cloud providers and decide which applications to keep on-premises, for example.

 

Fortunately, those choices don’t have to come at the expense of end-to-end network management and security for government agencies. Both of those can be well within reach, even in a hybrid IT world.

 

Find the full article on Government Computer News.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Hi there! While Tom is delivering a presentation for the Intelligent Cloud Summit here in Copenhagen, I've decided to get started on the Actuator for this week to free up his time for sightseeing later. My name is Suzanne--the other half of team LaRock-Larocque. The name thing is a long story. I'll save it for another time.

 

I'm not sure how Tom usually does things here, so bear with me. I'm sure he'll decide to scrap this whole thing anyway.

 

Lately, I've been really interested in sustainable architecture and design. I think it all started when I ordered a Dwell magazine subscription for a school fundraiser. Ever since, I've become more interested in the idea of building a Passivhaus of our own. And since we're in Copenhagen, it's the perfect time to learn about the building practices they're putting into effect here. Copenhagen has a goal to be the world's first carbon-neutral city by 2025. I was in NYC a couple of weeks ago, and I know it's a much larger city, but it's noticeably quieter here. Maybe it's the electric buses. Whatever they've put into place so far, it's working!

 

I love daydreaming about the type of house I would build. And in terms of design, my "Dream Spaces" Pinterest board is filled with all sorts of architectural styles. It's like I'm trying to channel my inner Chip and Joanna or Shea McGee. Speaking of Chip and Joanna--it's amusing how everyone who has had their home renovated by Fixer Upper just ends up putting it up on Airbnb. Apparently the "Fixer Upper effect" is alive and well in Waco, TX, and turns out to be a great investment.

 

Not sure about you, but our family is very excited for Avengers: Endgame. The Marvel movies came at just the right time for our kids. Watching them together as a family has been a great experience. But why is no one talking about Outlander getting picked up for its 5th and 6th seasons? I mean, let's be honest, Claire is a superhero. She travels through time, has pulled off more than one prison rescue, performs surgery in 18th-century conditions, and has committed murder--but only when justified.

 

So far, I'm really enjoying Copenhagen. It's very walkable and has all the charm that a European city should have. And evidence of hygge is everywhere--the blankets provided in outdoor café seating, the candles, the fireplaces. But more than that, it's how laid back everyone seems to be here.

 

Tom is just about done with his talk so I'm going to wrap this up. We've got more hygge to experience! Suzanne out.

 

And now, there is one left standing alone, having vanquished foe after foe after foe after foe after foe after foe. (Six match-ups in all.)

 

In a battle of questionable skills and worthless superpowers, there was one gift that the community found more valuable than others, more useful than not.

 

An underdog of 140-character proportion capable of understanding before a word has been uttered.

 

Our Superhero of Uselessness, our Champion of Mediocrity…

 

TWEETZ!

 

From the moment TWEETZ entered the ring, we were enthralled with the idea of reading 140 characters of anyone’s thoughts. We believe that TWEETZ is the best of the worst, the most helpful of the least useful…

 

TWEETZ, today we celebrate you.

 

(If GIFTZ had won, we would have been able to bestow TWEETZ on everyone, but alas… the spoils of the bracket battle)

 

Thanks to all who voted and debated this year’s battle. What are your suggestions for year 8?

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article from my colleague Sascha Giese on how tech inventions from the public sector can be used to better the world. This has been true for years and remains true today.

 

According to a recent policy paper published by the U.K.’s Government Digital Service, “Technology innovation is vital for the public sector.” Having the latest artificial intelligence (AI) or cloud capabilities can make a huge difference to the type and quality of work that organizations such as the NHS and central government can produce, including the potential to truly save lives.

 

There are already ways in which these technologies are being used in the public sector.

 

AI

 

One example of where AI is being implemented to benefit the public sector is in cancer screening. Researchers have developed a device that can use light to detect tiny electromagnetic changes present in tissue when cancer cells develop. This means patients don’t need to experience high levels of radiation when being diagnosed.

 

Cloud Computing

 

Government cloud solutions could benefit a range of organizations by providing flexible and cost-effective services and increasing the ability to collaborate across departments and organizations, leading to better services for the public. They also provide an essential foundation for adopting other technologies, such as AI and big data analytics, which are also in demand.

 

Big Data Analytics

 

Predictive crime-mapping is a big data technique beginning to be implemented by some police forces in the U.K. By using previous crime statistics, police officers can estimate the likelihood of a crime occurring in a given location at various times.
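
As a toy illustration of the idea (with invented incident data), the core of it is just turning historical counts into relative frequencies per location and time:

```python
from collections import Counter

# (location, hour-of-day) pairs pulled from historical crime records (invented)
incidents = [
    ("High Street", 23), ("High Street", 23), ("High Street", 2),
    ("Market Square", 23), ("High Street", 14),
]

counts = Counter(incidents)
total = sum(counts.values())

def likelihood(location, hour):
    """Share of past incidents at this place and time."""
    return counts[(location, hour)] / total

print(f"{likelihood('High Street', 23):.0%}")  # 40% of past incidents
```

Real deployments layer far more sophisticated models on top, but the principle is the same: past patterns inform where officers look next.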

 

How to Turn Transformative Tech Into Typical

 

As these examples show, a range of technologies currently available could benefit society as a whole if they were implemented more widely in the public sector. These innovative technologies can not only save lives, but also save money. The cost savings can be reinvested in other under-funded areas of the public sector, and efficiency and productivity will increase as employees save time on the tasks these technologies take on.

 

However, more foundational work needs to be done before these exceptions become the norm. Take the examples above: for the new AI device used in cancer screening to be rolled out across every hospital, the NHS would require increased investment in new skills for its employees to help them understand how AI can be used to the best of its abilities. And while the Cloud First policy is making great waves in central government, adoption is still not widespread, as many institutions still do not see the overall benefit. For the police, predictive crime-mapping is only being used in Manchester and Kent, but the value this technique could have if carried out by every U.K. police force would be immense. However, it would require more network coordination and insight sharing to become a reality.

 

It is clear there are challenges facing these organizations that are slowing down the integration process. But what are these, and how can they be overcome?

 

Understanding the Benefits

 

Despite the improvements that employees working directly with these transformative technologies are undoubtedly seeing, there can be a lack of understanding at a senior level of how investing in tech now will pay off in the long term. The senior staff responsible for determining new investments may benefit from considering the different developments being made, along with taking a step-by-step approach to implementation, to ensure that the technology is deployed in a way that creates the greatest good for the general public.

 

Employee Training

 

Across the public sector, there will be a need for increased employee training before any innovative technologies can be introduced. While systems such as AI can take on many tasks themselves, it’s important not to forget that employees still need to monitor and manage these systems to get the best results, as most AI still needs to be supervised.

 

Supporting Infrastructure

 

Having the right infrastructure in place is one of the most crucial aspects of tech integration. For an organization like the NHS, for example, being on a predominantly paper-based system means that some newer technologies would be more challenging to adopt, as the technology foundations should be in place before more varied systems are added. Ensuring aspects like the security, longevity, and manageability of a system from the design phase onwards will be critical to deploying new technologies effectively and achieving the desired return on investment.

 

At the same time, these types of management tools will help overcome many of the early concerns that public sector leaders have about new initiatives, such as the ability to demonstrate value, or ensuring security and managing risk. As a result, choosing and deploying these types of tools should go hand-in-hand with new technology deployments to ensure that the public sector is achieving the greatest possible results.

 

Find the full article on IT Pro Portal.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Today’s data center is full of moving parts. If your data center is hosted on-premises, there’s a lot to do day-in and day-out to make sure everything is functioning as planned. If your data center is a SaaS data center hosted in the cloud, there are still things you need to do, but far fewer compared to an on-premises data center. Each data center carries different workloads, but there’s a set of common technologies that need to be monitored. When VM performance isn’t monitored, you can miss a CPU overload or max out memory. When the right enterprise monitoring tools are in place, it’s easier to manage the workloads and monitor their performance. The following is a list of tools I believe every data center should have.

 

Network Monitoring Tools

Networking is so important to the health of any data center. Both internal and external networking play a key role in the day-to-day usage of the data center. Without networking, your infrastructure goes nowhere. Installing a network monitoring tool that tracks bandwidth usage, networking metrics, and more allows a more proactive or preventative approach to solving networking issues. Furthermore, an IP address management tool that stores all the available IP addresses and ranges, and dynamically updates as addresses get handed out, will go a long way toward staying organized.
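
As a rough illustration of what such a tool computes under the hood, here’s the classic utilization calculation applied to two interface octet-counter samples (the numbers are invented, and counter wrap is ignored for brevity):

```python
def utilization_pct(octets_t1, octets_t2, interval_s, link_bps):
    """Percent link utilization between two counter samples interval_s apart."""
    bits_transferred = (octets_t2 - octets_t1) * 8
    return 100 * bits_transferred / (interval_s * link_bps)

# Two samples taken 60 seconds apart on a 1 Gbps link:
print(f"{utilization_pct(1_000_000_000, 1_450_000_000, 60, 1_000_000_000):.1f}%")
# -> 6.0%; a monitoring tool would alert when this trends past, say, 80%
```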

 

Virtual Machine Health/Performance Monitoring Tools

Virtualization has taken over the data center landscape. I don’t know of a data center that doesn’t have some element of the software-defined data center in use. Industry-leading hypervisor vendors have created many tools and advanced options that go above and beyond server virtualization. With so many advanced features and so much software in place, it’s important to have a tool that monitors not just your VMs, but your entire virtualization stack. Software-defined networking (SDN) has become popular, and while that might fall under the networking section, most SDN configuration can be handled directly from the virtual administration console. Find a tool that provides more metrics than you think you need; it may turn out that you scale out and then require them at some point. A VM monitoring tool can catch issues like CPU contention, lack of VM memory, resource contention, and more.
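
For instance, CPU ready time is a classic contention signal. Here’s a minimal sketch of the kind of check a VM monitoring tool automates, with invented sample data and the common 5% rule of thumb:

```python
SAMPLE_INTERVAL_MS = 20_000  # 20-second real-time sampling window

# Milliseconds each VM spent ready-but-waiting for a physical CPU (invented)
vm_ready_ms = {"web01": 4_200, "db01": 300, "batch01": 1_100}

for vm, ready_ms in vm_ready_ms.items():
    ready_pct = 100 * ready_ms / SAMPLE_INTERVAL_MS
    if ready_pct > 5:  # rule-of-thumb threshold for CPU contention
        print(f"{vm}: {ready_pct:.1f}% CPU ready -- likely CPU contention")
```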

 

Storage Monitoring Tools

You can’t have a data center without some form of storage, whether it be slow disks, fast disks, or a combination of both. Implementing a storage monitoring tool will help the administrator catch issues that threaten business continuity, such as a storage path dropping, a storage mapping being lost, loss of connectivity to a specific disk shelf, a bad disk, or a bad motherboard in a controller head unit. Data is king in today’s data center, so it’s imperative to have a storage monitoring tool in place to catch anomalies or issues that might hurt business continuity or compromise data integrity.
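
One example of a check worth automating is path redundancy: a LUN that quietly drops from four active paths to two still works, right up until it doesn’t. A sketch with an invented path inventory:

```python
EXPECTED_PATHS = 4

active_paths = {"datastore01": 4, "datastore02": 2, "datastore03": 4}  # invented

for lun, active in active_paths.items():
    if active < EXPECTED_PATHS:
        print(f"ALERT: {lun} has {active}/{EXPECTED_PATHS} active paths -- "
              "redundancy degraded; investigate before it becomes an outage")
```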

Environment Monitoring Tools          

Last, but definitely not least, a data center environment monitoring tool can save you from losing hardware and data altogether. This type of tool protects a data center against physical or environmental issues. A good environment monitoring tool will alert you to excess moisture in the room, an extreme drop in temperature, or a spike in temperature. These tools usually include a video component for visual monitoring, plus sensors installed in the data center room to track environmental factors. Water can do serious damage in your data center, so great monitoring tools place sensors near the floor to detect moisture and any build-up of water.
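
Absolute thresholds aren’t the whole story, either; a fast rise in temperature can flag failed cooling before the room is actually hot. A sketch with invented one-per-minute readings:

```python
readings_c = [21.0, 21.1, 21.0, 23.5, 26.2]  # last five minutes (invented)

MAX_TEMP_C = 27.0          # absolute ceiling
MAX_RISE_C_PER_MIN = 1.0   # rate-of-change ceiling

if readings_c[-1] > MAX_TEMP_C:
    print("ALERT: temperature over limit")
if readings_c[-1] - readings_c[-2] > MAX_RISE_C_PER_MIN:
    print("ALERT: temperature climbing fast -- check the cooling units")
```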

 

You can’t be too careful when it comes to protecting your enterprise data center. Monitoring tools like the ones listed above are a good place to start. Check your data center against my list and see if it matches up. Today, there are many tools that encompass all these areas in one package, making it convenient for the administrator to manage it all from a single screen.

We have come to the end.

 

The battles have been hotly debated, hard-fought (not really) contests pitting what is, quite honestly, a pretty silly set of questionable skills against one another to determine the MOST USEFUL of the Useless Superpowers.

 

We asked you, the community, to determine which Useless Superpower would be better than no superpower at all. Some of you voted with your hearts, some with your heads, and some with a commitment to fighting crime we had no idea you had!

 

  • USBz (57%) squeaked past BINARYz despite a compelling and passionate campaign for the 1s and 0s.
  • Our favorite play-in, TWEETz (78%), confidently moves into the final over DROPz, who just didn’t have the strength to get to the end.

 

From 33 to 2.

But, there can only be ONE!

 

VOTE NOW!

In my final post in this series, I want to talk about some things that we have not touched on already and how it relates to what we’ve already discussed. First, let’s take a quick recap of what we’ve covered.

 

In post 1, we talked about the pace at which people are moving to the public cloud. You need to do your homework before moving, as it can be costly or the platform can be wrong for the workload.

 

In post 2, we covered some of the positives and negatives with moving to the public cloud. We also considered that just because you start in one place, it doesn’t have to stay there for its whole lifespan.

 

Post 3 discussed the current wave of digital transformation strategy, which needs top-down direction. We started to look at agile deployment and paying off your technical debt.

 

Post 4 went on to look at the revolution happening with application deployment and how the use of new technologies, such as containers and microservices, can benefit your application.

 

Number 5 in the series centered on how proper planning can help you achieve a private cloud, and on how spending money won’t automatically solve all your problems. Metrics can be your friend; recording them can help you with decisions you may have to make in the future and prove out any solution you implement.

 

So, in this post, I also want to try to look to the future at things that may start to appear in more conversations once you have a solid hybrid cloud strategy. I’m hopeful you have figured out a method for using the various hyper-scalers, service providers, and online services, and how you will adopt them moving forward (thankfully, no cloud project is a big-bang scenario reorganizing everything in one hit). It’s a hard thing to do, especially if you have a traditional deployment model.

 

A more traditional approach to IT is deploying an application and, barring minor upgrades and support during its life span, leaving it basically as it was on the day of deployment. That’s why we have companies with applications stuck on an NT4 platform. A newer, more agile way of attacking the aging waterfall deployment is to constantly revisit and improve areas. As previously mentioned, this is one of the reasons why people who move to the cloud for their applications are generally more successful. They respect the fact that there’s always room for improvement and that there are things to learn from getting it wrong.

 

For more traditional IT teams or those with possibly more regulation around their industry, we still see people turning to “cloud seeding.” Cloud seeding is generally the step before a proof of concept (POC): finding out your best option, discussing it with your board, and maybe researching what your competitors are up to, so that when the time comes, you can bring up a workload quickly. It can also be used to describe part of your disaster recovery methodology. These organizations have a lot of work to do to start using a hybrid cloud approach.

 

But there are other areas that may shape your business over the coming years. While you may not be initially planning to undertake any of these, having an understanding and opinion on whether they could fit into your organization will help you make a more strategic decision on the evolution of your hybrid cloud strategy. Many of these go hand in hand and hopefully they will spark some synergies.

 

The Internet of Things (IoT) seems to have been on the cusp of modern IT deployments for many years now, but in recent months, more and more conversations seem to include some form of IoT. I feel the term “things” doesn’t really do these clever pieces of equipment justice at times. This device-to-device communication fueling IPv6 adoption is more about problem-solving than IT. Industrial use cases have been around for many years, and we’re seeing more and more uses spread into everyday business. From monitoring crowd movements, smart devices, and wearables to connected bins, IoT is being deployed across a vast range of opportunities to spot slow points or breakdowns in processes before they happen, so we can be more proactive in dealing with these critical business processes. We need to consider the security of the devices in question, the information they gather, and where this information should live.

 

Mobile edge computing is simply where some form of computation happens at the edge device or devices before data is sent on to a cloud. A strong example of this is connected driverless cars. It’s impossible for them to transmit every bit of data back to the core, so some analysis must be done on the device, as the bandwidth required for all the devices to upload and be received by the cloud in question would be phenomenal. Take the example of trying to find a needle in a haystack. Do you move the haystack or stacks back to the farmyard and sort through them? Or do you burn the hay where it sits and carry the needle home? Given the power going into handheld devices today, it would be wise to start using some of that horsepower before sending data to a cloud or clouds for further use. Catering for this type of deployment and understanding the varying devices in use will take a lot of planning and evolution over time.
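
The pattern itself is simple: reduce locally, ship only the summary. A minimal sketch, where the payload shape is just an assumption for illustration:

```python
import json
import statistics

def summarize(raw_readings):
    """Collapse a raw sample stream into a few bytes of aggregate."""
    return {
        "count": len(raw_readings),
        "mean": round(statistics.mean(raw_readings), 3),
        "max": max(raw_readings),
    }

raw = [0.91, 0.93, 0.92, 4.80, 0.90]  # one sensor's recent readings (invented)
payload = json.dumps(summarize(raw))
print(payload)  # this small summary goes to the cloud, not the raw stream
```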

 

Distributed ledgers—I’m not talking about cryptocurrencies, but their uses as immutable systems of record—have some very interesting use cases attached. The most interesting one I have heard of recently is around the fishing industry to show the traceability of where the fish were caught, by whom, where landed, etc., on the supply of this food resource. Whether in a private cloud or working in the public cloud space, this can have huge implications on how we do business in the future.
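
Strip away the cryptocurrency baggage and the core mechanism is an append-only chain of hashes: each record commits to the one before it, so tampering with any past entry breaks every later hash. A toy sketch (the fishing records are invented):

```python
import hashlib
import json

def add_record(chain, data):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    chain.append({"data": data, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

chain = []
add_record(chain, {"vessel": "Aurora", "catch": "cod", "landed": "Hirtshals"})
add_record(chain, {"buyer": "Fishmonger A/S"})
print(chain[1]["prev"] == chain[0]["hash"])  # True: each record links back
```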

 

Serverless (the millennial way of expressing utility computing) is the idea of pay-as-you-go for a service or function. It removes the need for you to purchase and maintain the platform to carry out the required computation. I see areas where spikes and troughs in demand can really start to capitalize on this new type of deployment model, as you only pay for the code executed. So, if you have an average of 1,000 operations a day, but a ten-times increase at Christmas, you only have to pay for the computation during that season and don’t leave it sitting underutilized during the rest of the year.
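
The arithmetic behind that claim is worth a quick look. With invented per-invocation pricing, the December spike is the only place you pay extra:

```python
PRICE_PER_INVOCATION = 0.0000002  # assumed price, in dollars
BASE_PER_DAY = 1_000

jan_to_nov = BASE_PER_DAY * 334 * PRICE_PER_INVOCATION
december = BASE_PER_DAY * 10 * 31 * PRICE_PER_INVOCATION  # ten-times spike
print(f"~${jan_to_nov + december:.2f}/year, spike included")
```

Compare that with provisioning servers for the December peak and letting them sit underutilized the other eleven months.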

 

Artificial intelligence (AI) is the shiny new thing—everyone wants to see what it can do for them. With various hyper-scaler players offering frameworks, you can easily implement some form of machine inference within minutes. There are many subfields to AI, including machine learning, deep learning, speech processing, neural networks, and vision, to name a few. Each of these share some common techniques but also have some radically different techniques to building knowledge.

 

Deep learning for security could be a way to penetration test. Imagine combining it with public key infrastructure for authenticating users and a real-time inventory of devices to truly understand what’s going on within your environment so you can accurately secure it, including unmanaged devices.

 

Cyberterrorism, corporate espionage, and ransomware are on the rise. Password-cracking algorithms can easily break ten-character passwords. Using AI for white hat hacking is gaining a lot of traction. Joseph Sirosh, CTO of AI at Microsoft, recently said, “Artificial intelligence is the opposite of natural stupidity.” By using AI to keep checks on your security model, you can improve your overall infrastructure, especially if you’re keeping it in the public cloud, where it can be discovered and has an attack surface.
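
The password claim is easy to sanity-check with keyspace arithmetic. The guess rates below are assumptions, but they show why the hash in use matters as much as the password length:

```python
keyspace = 62 ** 10  # 10 chars of upper/lower/digits: ~8.4e17 combinations

for label, guesses_per_s in [("fast unsalted hash on a GPU rig", 1e12),
                             ("slow, properly tuned hash", 1e5)]:
    days = keyspace / guesses_per_s / 86_400
    print(f"{label}: ~{days:,.0f} days to exhaust the keyspace")
```

Real attacks do far better than brute force by exploiting human patterns, which is exactly where the machine learning angle comes in.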

 

If you’re wondering about the fortifications you have built to keep intruders out, well, don’t forget the people who enter from 9 to 5, five days a week. There’s a scene in Eraser (1996) where our lead protagonists go back to where one of them worked to decrypt information she removed for whistleblowing purposes. Even though the only apparent way to read the disk is inside a highly secure vault, the designer left a back door. This highlights the human aspect that security teams must consider. I recently read about a university that’s using AI to try to spear-phish its own employees to make sure they’re following the most recent rules and guidelines, which is just another of the many possible uses for AI.

 

I understand that I’ve only briefly touched on some of the topics that follow from hybrid cloud adoption, but it’s good to start thinking about the next big thing and how you can help move your company forward. For some of these and other use cases, a properly adopted hybrid cloud strategy will be needed for the new venture to succeed.

 

Maybe you feel you have already adopted an updated hybrid cloud mode of IT deployment. Maybe you’ve relaxed the dress code for your development team (hoodies optional). Maybe you’ve smartened up the web front end of your application from Java to HTML5. These changes may be the start of something bigger. You may be thinking about the different tools and methodologies, because hybrid cloud is here to stay, and fighting against it is like King Canute trying to halt the incoming tide. It’s inevitable, so embracing and preparing for it seems like the smart move. It’s more about the processes you implement and how quickly you can deal with change that will put you, your team, and your department in good stead with your board and, truth be told, make you a more successful business. It’s about looking past the current challenges and deciding where you want to be in five years—and how best to deliver it. It’s about keeping your options open and questioning those who say, “because it has always been done that way, it can’t change.” And with that, I wish you luck.

Welcome to April! We’ve hit the first quarter pole for the year. Time to check in with yourself on those 2019 goals you set back in January. There’s still time left in the year to get it done!

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

Drunk shoppers spend $48B per year while intoxicated, mostly on Amazon

I had a friend who once hit his head so hard he forgot he’d ordered $800 worth of beer-making supplies. So, yeah, blackout drunk shopping seems to be a thing, too.

 

Hackers Hijacked ASUS Software Updates to Install Backdoors on Thousands of Computers

It’s a practical attack vector. Instead of trying to hack thousands of computers, hack the method used to update those devices. I’m surprised ASUS did not have adequate threat detection in place.

 

Office Depot fined millions for tricking customers into believing their PCs were infected with malware

At least ASUS wasn’t deliberately trying to scam customers like Office Depot. This is why we can't have nice things.

 

European parliament votes to scrap daylight saving time from 2021

Just when you thought time zone debates couldn’t get any worse. Buckle up for a whole new round of “use UTC for everything” discussion threads.

 

Study shows programmers will take the easy way out and not implement proper password security

Honestly, they could have just stopped at “developers tend to take the easy way out.” Like DBAs, good developers will follow instructions and get the job done in the fastest way possible. For decades, that has meant security comes last.
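
For the record, the non-easy way isn’t even much code. A minimal sketch using only Python’s standard library (the cost parameters here are illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password):
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
```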

 

Analysis: Why Americans Shouldn’t Feel Grateful For $137 Insulin

Healthcare in this country is broken. Everyone knows this, and there seems to be no fix in sight.

 

How to Deliver Constructive Feedback in Difficult Situations

Longer read but worth your time. Think about the conversations you have in the office, and in meetings. Then, consider how to make changes in your communication techniques for the better.

 

Pork belly topped with an egg is my favorite breakfast:

 

Cloud migration follows the same principles as any other complex task. With methodical planning and appropriate preparation, it can be a smooth and painless experience. While “divide and conquer” can have negative connotations, it’s a popular technique in computer science and applies equally well to program management.

 

Assuming a migration was planned as such and all workloads are in the cloud now, does that mean the job is done? Absolutely not! There are several tasks that should immediately follow, some of which should become iterative processes going forward.

 

Closure

Immediately after migration, it can be all high-fives on a job well done. However, it’s crucial to clean up right away, including shutting down and removing any redundant systems, severing superfluous network connectivity, and switching the relevant applications on the cloud platform to production status.

 

These actions are extremely important from an operations standpoint. The cost of running those extra resources is one thing, but those systems are easily forgotten over time. While they exist, they only cause operational confusion and security issues. Action should be taken to do a controlled but prioritized decommissioning of such systems and connectivity.

 

Evaluation

Once closure has happened and the platform is stable, evaluate whether the migration was successful and achieved all it set out to do.

 

The first point of reference is the business case formed as part of the project initiation phase, where all stakeholders agreed it made sense to migrate to a cloud platform. The other is the list of KPIs (key performance indicators) defined as part of the audit carried out just before the migration.

 

The latter is tangible proof of what was gained from the whole exercise. When defining the metrics, be careful that the measurements are “like for like” and that the target objects will still exist in their current form post-migration, so there’s no confusion. Evaluating and documenting before and after the migration is important, as it keeps the team honest about their goals and any decisions made. At the same time, it also makes success undeniable.

 

Costs

Sizing of machines is one area where you should err on the side of caution and oversize. That, combined with the fact that applications tend to live in both environments in the early days, increases running costs in the short term. Once the dust has settled, it’s time to focus on optimizing for costs. Most cloud platforms offer discount pricing for infrastructure and native tools to determine where such savings can be made.

 

This is a quick and easy win. Furthermore, it’s easy to identify machines that will be required permanently; they are good candidates for discount pricing by committing a certain amount of spend to the vendor. In some cases, further savings might be possible by standardizing on certain types or families of instances. A review should be done every year to determine how many resources can benefit from such discounts.
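
The annual review can be as simple as flagging machines that ran all year. A sketch with invented rates and discounts:

```python
ON_DEMAND_PER_HOUR = 0.10  # assumed on-demand rate, in dollars
COMMIT_DISCOUNT = 0.40     # assumed 40% discount for a one-year commitment
HOURS_PER_YEAR = 8_760

usage_hours = {"app01": 8_760, "db01": 8_760, "ci-runner": 1_200}  # last year

for name, hours in usage_hours.items():
    if hours == HOURS_PER_YEAR:  # always-on: a strong commitment candidate
        saving = hours * ON_DEMAND_PER_HOUR * COMMIT_DISCOUNT
        print(f"{name}: commit and save ~${saving:,.0f}/year")
```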

 

Refactoring

Speaking of cost optimization, another way is to refactor applications so that they can benefit from on-demand resource provision such as “function as a service” architectures or even stateless applications where infrastructure can be deployed for the duration of the job and destroyed thereafter.

 

Major cost efficiencies can be found using cloud-native technologies and methodologies. The best part is that due to the nature of the public cloud, such refactoring can be done in isolation and tested at full-scale in parallel.

 

Small and manageable improvements can be made over time to gain those efficiencies and that work should become part of an ongoing process to improve applications.

 

Security

In the coexistence phase during migration, security policies have to allow traffic and system access between applications in both environments. Those policies need to be reviewed immediately after decommissioning the “legacy” side of the application.

 

There could be a tendency to wait until all legacy applications are decommissioned, but by doing so, you’d be introducing a security risk for the duration of the migration. While it can be a tedious process, any security breach will end up consuming even more time, so it’s best dealt with as soon as possible. The security review shouldn’t be limited to just the legacy workloads. Security for cloud platforms is very different from security for traditional platforms, and migration provides an opportunity to review the capabilities available and take advantage where possible.
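
Part of that review is mechanical and worth scripting: for example, flagging any rules that still permit traffic from address ranges belonging to the decommissioned legacy side. A sketch with invented rules and ranges:

```python
import ipaddress

legacy_nets = [ipaddress.ip_network("10.10.0.0/16")]  # decommissioned ranges

rules = [  # invented firewall rule export
    {"id": 101, "source": "10.10.4.0/24", "action": "allow"},
    {"id": 102, "source": "172.16.1.0/24", "action": "allow"},
]

for rule in rules:
    src = ipaddress.ip_network(rule["source"])
    if any(src.subnet_of(net) for net in legacy_nets):
        print(f"Rule {rule['id']} still allows legacy range {src} -- remove it")
```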

 

Center of Excellence

The migration team goes through many highs and lows during this process. That not only improves bonding but also develops a lot of skill and knowledge.

 

That knowledge is priceless, and it would be a shame if the team disbanded completely and went back to their “day jobs.” Technical members of the team will likely continue with the new platform, but other members also hold precious knowledge of the whole journey and shouldn’t be ignored. It makes sense to preserve that knowledge and experience by keeping the team together as a “Center of Excellence” for the long term. The team should reserve time and meet on a regular basis for strategic discussions and decisions going forward.

 

Conclusion

Migration to the cloud is no mean feat, but once achieved, it opens up so many possibilities to morph the infrastructure and tools into something in line with modern-day architecture.

 

This list is by no means exhaustive but does give a good start. As cloud technologies and skills to use them develop, only the sky is the limit—no pun intended!

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article on defending against foreign hackers. If you give an IT pro the right training and tools, there’s no question that they’ll be better prepared.

 

Cybersecurity has become a hot topic of conversation. From Facebook to DHS, both private and public organizations have become extraordinarily cognizant of the potential threats posed by external hackers.

 

Last year, Director of National Intelligence Dan Coats sounded the warning bells. In his Worldwide Threat Assessment of the U.S. Intelligence Community, Coats wrote:

 

“The potential for surprise in the cyber realm will increase in the next year and beyond as billions more digital devices are connected—with relatively little built-in security—and both nation states and malign actors become more emboldened and better equipped in the use of increasingly widespread cyber toolkits.”

 

Meeting these challenges may be difficult, but not impossible.

 

By focusing on people, technology, and planning, federal network administrators may get a better handle on their networks, while strengthening security policies that can keep the bad actors at bay.

 

Hire the Right People, and Train the Ones Already in Place

 

Hackers are smart. They learn from being deterred.

 

It’s extremely important that agency personnel are continually trained about hackers’ latest exploits. This knowledge can be critical to detecting and reacting to potential threats.

 

Agencies should make investing in ongoing security education and training a top priority.

 

IT teams should also proactively use and scour all resources at their disposal—including social media channels, networking groups, and threat feeds—to keep up to speed on hacker activity, malware, and more.

 

Arm Employees With the Proper Tools

 

Today’s defense agencies are dealing with massive amounts of data, thousands of connected devices, and private, public, and hybrid cloud infrastructures. Manual monitoring approaches and traditional tools will likely be ineffective in these environments.

 

Effective federal security and network monitoring go hand-in-hand with solutions that can automatically scan and respond to potential anomalies, wherever they may be. For example, if an application becomes compromised, it can be difficult to trace the problem back to its source, particularly if that application exists within a hybrid IT environment.

 

Teams need tools that provide deep visibility into the entirety of their networks, so they can locate and quickly correct the issue before it becomes a critical problem. Agencies also need a means of tracking devices as they appear on their networks. If a rogue or unauthorized device attempts to access the network, administrators can track it directly to its user.

 

That user could be a member of a foreign hacking group, or a bad actor who obtained a DoD employee laptop that may have been erroneously left behind. Without the proper tools in place, there may be no way to know, and certainly no way to immediately block the device or shut down network access privileges.
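
The device-tracking piece boils down to comparing what’s on the wire against an authorized inventory. A minimal sketch, with invented MAC addresses:

```python
authorized = {  # invented asset inventory
    "aa:bb:cc:00:11:22": "jsmith (issued laptop)",
    "aa:bb:cc:00:11:33": "printer, room 3F",
}

seen_on_network = ["aa:bb:cc:00:11:22", "de:ad:be:ef:00:01"]

for mac in seen_on_network:
    if mac not in authorized:
        print(f"Unknown device {mac}: block the port and investigate")
```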

 

Develop—and Continuously Update—Security Strategies

 

A strategy shouldn’t simply be bullet points in an email, but a well-formulated plan that outlines exactly what steps should be taken in case of a breach.

 

The security strategy should also be continually updated. Threats do not stand still; neither should security plans.

 

In addition to their daily checklist of action items (log reviews, application patching, etc.), IT teams should plan on testing and updating their security procedures on a regular cadence—annually, at minimum, if not more frequently.

 

By building a powerful combination of the right people, the right tools, and the right strategies, defense agencies will be well equipped to combat these new threats.

 

Find the full article on American Security Today.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Each day brings us one step closer to crowning this year’s winner. We are down to our final four, and another group of favorites have seen their runs come to an end. With the exception of just one match-up, the superpowers moving on won their spots by significant margins of victory.

 

  • Generosity (Giftz) was just barely muscled out by Dropz, which makes sense, since Dropz grants only temporary strength. Apparently, just enough to eke out the win. (51% to 49%)
  • These match-ups continue to provide moments of massive meta-goodness as 2ndz lives up to its namesake. USBz will move on to the next round. (66% to 34%)
  • The 1s and 0s (Binaryz) rolled through this round, leaving Tumblz in the dust. (63% to 37%)
  • Our favorite play-in, Tweetz, keeps up its omniscient, 140-character-inspired march to the end. Hoverz crashes to the ground in this round. (75% to 25%)

 

We have two more rounds to go before we can declare one of these final four the BEST of the Worst.

 

Check out the next round and vote, campaign, and debate to determine our finalists!

