Geek Speak

14 Posts authored by: arjantim

In my last couple of posts, I’ve tried to paint a picture of the past and present of database management and monitoring. We've seen that good database performance has everything to do with knowledge, which, as we all know, is power. In our case, it is important to continue adding knowledge by learning from the mistakes we make.


In my opinion, we’re at the start of yet another revolution and it's being brought about by blockchain. And while that revolution is going to change a lot of the things we’re taking for granted these days, the digital revolution is still going strong. IT keeps evolving, and with things like IoT, AI, and ML, the digital revolution has gained a new dimension. It will take databases even further, and make them increasingly complex. This is why it will be so important to know and learn from everything going on in our databases today.


IT has always been a quickly evolving field. The first major leaps in human history took centuries, while these days revolutions seem to follow each other in a matter of decades. We’ll never stop taking our inventions and pushing them a step further.


You get the point. It is critically important that we know what is going on with our databases, and which steps we need to take to give our companies an edge over the competition. We need to be able to see this before it even happens, by correlating all the data points we measure on our databases. We need a tool that lets us discover the root causes of the complex issues we sometimes face in the database world. And we need it in real time.


Thanks for reading! Here is my last cartoon for now.


[Cartoon: Invention, Fire, Samsung Galaxy Note 7]

The title of this post is a quote that might (or might not) be from Albert Einstein. I first thought the title would be something like "Why root cause analysis is your saviour," but I think the knowledge/experience title has more potential, so I’ll stay with Albert…


I’ve seen my fair share of companies collecting loads of data on incidents happening in their database environments, but actually using this "knowledge" to come out better and more experienced seems to be a hard thing to establish.


What Albert means is that the only way to make things better is by learning from our mistakes. Mistakes will happen and so will incidents; there is nothing we can do to prevent that. And to be honest, Albert (and I ;)) wouldn’t want you to stop making mistakes, because without mistakes innovation would come to an end. That would mean that this is it, and nothing will ever be better than what we have now. No. We need innovation. We need to push further and harder. In the database world, I mean.

[Image: Einstein quote]


To do so, we’ll have incidents. But if we perform root cause analysis, we can learn from the incidents that happen and figure out what we can do to prevent the same problem from ruining our day ever again.


The right tool to perform root cause analysis will provide you with all the information needed. We all want to end up like the kid in the picture below, right? Although… Albert and I might say the only way to innovate is by making mistakes, AND learning from them.


In two weeks I’ll write another post in which I’ll look back at the posts from the last couple of weeks and at the tools I think are essential for a well-performing database environment, now and in the future. In the meantime, I love the comments you all provide. I’ll try to answer as many as possible in due time.

In the last two posts, I talked about databases and why network and storage are so darn important for their well-being. These are two things that a lot of companies struggle with and, in some ways, seem to have mastered over the years in their own data centers. But now it seems that mastery will become obsolete as things like cloud quickly become the new normal.


Let’s leave the automotive comparison for now and concentrate on how cloud strategy is affecting the way we use and support our databases.


There are a lot of ways to describe the cloud, but I think the best way for me to describe it is captured in the picture below:

As Master Yoda explains, cloud is not something magical. You cannot solve all of your problems by simply moving all your resources there. It will not mean that once your database is in the cloud that your storage and network challenges will magically vanish. It is literally just another computer that, in many cases, you will even share with other companies that have moved their resources to the cloud as well.


As we all know, knowledge is power. The cloud doesn’t change this. If we want to migrate database workloads to the cloud, we need to measure and monitor. We need to know the impact of moving things to the cloud. As with the Force, the cloud is always there ready to be used. It is the how and when that will impact the outcome the most. There are multiple ways to leverage the cloud as a resource to gain an advantage, but that doesn’t mean that moving to the cloud is the best answer for you. Offense can be the best defense, but defense can also be the best offense. Make sure to know thy enemy, so you won’t be surprised.


May the cloud be with you!

Last time I told you guys I really love the Ford story and how I view storage in the database realm. In this chapter, I would like to talk about another very important piece of this realm, The Network.


When I speak with system engineers working in a client's environment, there always seems to be a rivalry between storage and network regarding who's to blame for database issues. However, blaming one another doesn’t solve anything. To ensure that we are working together to solve customer issues, we need to first have solid information about their environment.


The storage part we discussed last time is responsible for storing the data, but there needs to be a medium to transport the data from the client to the server and between the server and storage. That is where the network comes in. And to stay with my Ford story from last time, this is where other aspects come into play. Speed is one of these, but speed can be measured in many ways, which seems to cause problems with the network. Let’s look at a comparison to other kinds of transportation.


Imagine that a city is a database where people need to be organized. There are several ways to get the people there. Some are local, and thus the time for them to be IN the city is very short. Some live in the suburbs, and their travel is a bit longer due to the greater distance, with more people traveling the same road. If we go a bit further and concentrate on the cars again, there are a lot of cars driving to and from the city. How fast one gets to or from the city depends on others who are similarly motivated to reach their destination as quickly as possible. Speed is therefore impacted by the way the drivers perform and by what happens on the road ahead.


[Image: Sheep jam]


The network is the transportation medium for the database, so it is critical that this medium is used in the correct way. Some of the data might need something like a Hyperloop to travel back and forth over medium-to-long distances, while other data may be fine with those shorter trips.


Having excellent visibility into the data paths to see where congestion might become an issue is a very important measurement in the database world. As with traffic, it gives one insight into where troubles could arise, as well as offering the necessary information about how to solve the problem that is causing the jam.


I don't believe the network or storage is to blame. The issue is really about how you build and maintain your infrastructure. If you need speed, make sure you buy the fastest thing possible. However, be aware that what you buy today is old tomorrow. Monitoring and maintenance are crucial when it comes to a high-performing database. Make sure you know what your requirements are and what you end up getting to satisfy them. Be sure to talk to the other resource providers to make sure everything works perfectly together.


I'd love to hear your thoughts in the comments below.

I’ve always loved the story of how Henry Ford built his automotive empire. During the Industrial Revolution, it became increasingly important to automate the construction of products to gain a competitive advantage in the marketplace. Ford understood that building cars faster and more efficiently would be hugely advantageous, so he developed an assembly line as well as a selling method (you could buy a Model T in every color, as long as it was black). If you want to know more about how Ford changed the automotive industry (and much more), there is plenty of information on the interwebs.


In the next couple of posts, I will dive a little deeper into the reasons why keeping your databases healthy in the digital revolution is so darn important. So please, let’s dive into the first puzzle of this important part of the database infrastructure we call storage.


As I already said, I really love the story of Ford and the way he changed the world forever. We, however, live in a revolutionary time that is changing the world even faster. This revolution seems -- and seems is the right word if you ask me -- to focus on software instead of hardware. Given that the Digital Revolution is still relatively young, we must be like Henry and think like pioneers in this new space.


In the database realm, it seems to be very hard to know what the cause of performance, or the lack thereof, is and where we should look to solve the problems at hand. In a lot of cases, it is almost automatic to blame it all on the storage, as the title implies. But knowledge is power, as my friend SpongeBob has known for so long.

[Image: SpongeBob on knowledge]

Storage is an important part of the database world, and with constantly changing and evolving hardware technology, we can squeeze more and more performance out of our databases. That being said, there is always a bottleneck. Of course, it could be that storage is the bottleneck we’re looking for when our databases aren’t performing the way they should. But in the end, we need to know what the bottleneck is and how we can fix it. More important is the ability to analyze and monitor the environment in a way that lets us predict database performance and adjust it as needed before problems occur.
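That predict-before-it-breaks idea can be sketched in a few lines. This is a toy illustration only (plain Python, not tied to any monitoring product; the latency samples and the 10 ms threshold are invented): fit a line through recent samples of a metric and see when the trend would cross a limit.

```python
def extrapolate_breach(samples, threshold):
    """Fit a straight line through (time, value) samples and return the
    time at which the trend crosses `threshold`, or None if it never will."""
    n = len(samples)
    if n < 2:
        return None
    mean_t = sum(t for t, _ in samples) / n
    mean_v = sum(v for _, v in samples) / n
    var = sum((t - mean_t) ** 2 for t, _ in samples)
    if var == 0:
        return None
    slope = sum((t - mean_t) * (v - mean_v) for t, v in samples) / var
    if slope <= 0:
        return None  # flat or improving: no breach ahead
    intercept = mean_v - slope * mean_t
    return (threshold - intercept) / slope

# Hypothetical storage latency in ms, sampled hourly and trending upward.
latency = [(0, 4.0), (1, 4.5), (2, 5.1), (3, 5.4)]
print(extrapolate_breach(latency, 10.0))  # hour at which the trend hits 10 ms (~12.4)
```

A real monitoring tool does far more (baselines, seasonality, correlating metrics across storage, network, and queries), but the principle is the same: act on the trend, not on the outage.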


Henry Ford was looking for ways to fine-tune the way a car was built, and ultimately developed an assembly line for that purpose. His invention cut the amount of time it took to build a car from 12 hours to a surprising two-and-a-half hours. In the database world, speed is important, but blaming storage and focusing on solving only part of the database puzzle is short-sighted. Knowing your infrastructure and being able to tweak and solve problems before they start messing with your performance is where it all starts. Do you think otherwise? Please let me know if I forgot something, or got it all wrong. I would love to start the discussion, and see you in the next post.


To the cloud and beyond

Posted by arjantim Sep 20, 2016

The last couple of weeks we talked about the cloud and the software-defined data center, and how it all fits into your IT infrastructure. To be honest, I understand a lot of you have doubts when discussing cloud and SDDC. I know the buzzword lingo is strong, and it seems the marketing teams come up with new lingo every day. But all in all, I still believe the cloud (and with it the SDDC) is a tool you can’t just reject by treating it as a marketing term.


One of the things mentioned was that the cloud is just someone else’s computer, and that is very true, but saying that overlooks some basic stuff. We have had a lot of trouble in our datacenters, and departments sometimes needed to wait months before their application could be used, or before the change they asked for was done.


Saying your own datacenter can do the same things as the AWS/Azure/Google/IBM/etc. datacenters is wishful thinking at best. Do you get your own CPUs straight out of the Intel factory? Do you own the Microsoft kernel? And I could continue with much more you will probably never see in your DC. Don’t get me wrong, I know some of you work in some pretty amazing DCs.


Let’s see if we can put it all together and come to a conclusion most of us can share. First, I think it is of the utmost importance to have your environment running at a high maturity level. Often I see the business running to the public cloud and complaining about their internal IT’s lack of resources and money to perform at the same level as a public cloud. But throwing all your problems over the fence into the public cloud won’t fix them. No, it will probably make things even worse.


You’ll have to make sure you’re in charge of your environment before thinking of going public, if you want to have the winning strategy. For me, the hybrid cloud or the SDDC is the only true cloud for many of my customers, at least for the next couple of years. But most of them need to get their environments to the next level, and there is only one way to do that.


Know thy environment….


We’ve seen it with outsourcing, and in some cases we are already seeing it in Public Cloud, we want to go in but we also need the opportunity to go out. Let’s start with going in:


Before we can move certain workloads to the cloud, we need to know our environment from top to bottom. There is no environment where nothing goes wrong, but environments where monitoring, alerting, remediation, and troubleshooting are done at every level of the infrastructure, and where money is invested to keep the environment healthy, normally tend to have a much smoother walk towards next-generation IT environments.


The DART framework can be used to get the level needed for the step towards SDDC/Hybrid Cloud.


We also talked about SOAR -- Security, Optimization, Automation, and Reporting -- to make sure we get to the next level of infrastructure, and it is just as important as the DART framework. If you want to create that level of IT environment, you need to be in charge of all these bullet points. If you are able to create a stable environment on all these points, you’re able to move the right workloads to environments outside your own.



I’ve been asked to take a look at SolarWinds Server and Application Monitor (SAM) 6.3 and say something about it. For me, it is just one of the tools you need in place to secure, optimize, and automate your environment, and to show and tell your leadership what you’re doing and what is needed.


I’ll dive into SAM 6.3 a bit deeper once I’ve had the time to evaluate the product a little further. Thanks for hanging in there, and for giving all those awesome comments. There are many great things about SolarWinds:


  1. They have a tool for all the things needed to get to the next-generation datacenter
  2. They know having a great community helps them become even better


So SolarWinds, congrats on that, and keep the good stuff coming. To the community, thanks for being there and helping us all get better at what we do.

In the last couple of years, the business has constantly been asking IT departments how the public cloud can provide services that are faster, cheaper, and more flexible than in-house solutions. I’m not going to argue about whether this is right or not; it is what I hear at most of my customers, and in a couple of cases the answer seems to be automation. Next-gen data centers that leverage a software-defined foundation use high levels of automation to control and coordinate the environment, enabling service delivery that will meet business requirements today and tomorrow.


For me, the software-defined data center (SDDC) provides an infrastructure foundation that is highly automated for delivering IT resources at the moment they are needed. The strength of the SDDC is the idea of abstracting the hardware and enabling its functionality in software. Due to the power of hardware these days, it’s possible to use a generic platform with specialized software that enables the core functionality, whether for a network switch or a storage controller. Network switches, for example, were once specialized hardware appliances; today, they are more and more often virtualized with specialized software. Virtualization has revolutionized computing and allowed flexibility and speed of deployment. In today’s IT infrastructures, virtualization enables both portability of entire virtual servers to off-premises data centers for disaster recovery and local virtual server replication for high availability. What used to require specialized hardware and complex cluster configuration can now be handled through a simple check box.


By applying the principles behind virtualization of compute to other areas such as storage, networking, firewalls, and security, we can use its benefits throughout the data center. And it’s not just virtual servers: entire network configurations can be transported to distant public or private data centers to provide full architecture replication. Storage can be provisioned automatically in seconds and perfectly matched to the application that needs it. Firewalls are now part of the individual workload architecture, not of the entire data center, and this granularity protects against threats inside and out, yielding unprecedented security. But what does it all mean? Is the SDDC some high-tech fad or invention? If you ask me: absolutely not. The SDDC is the inevitable result of the evolution of the data center over the last decade.


I know there is a lot of marketing fluff around the datacenter, and Software Defined is one of those terms, but for me the SDDC is the perfect fit for a lot of companies at this time. As for what the future will bring: who knows where we’ll stand in 10 years! The only thing we know is that a lot of companies are struggling with their IT infrastructure and need help bringing the environment to the next level. SDDC is a big step forward for most (if not all) of us. Call it what you like, but I’ll stick with SDDC.


The Hybrid Cloud

Posted by arjantim Aug 23, 2016

Keeping it easy this post. It'll be just a small recap to build some tension for the next few posts. In the last two posts, I talked a little about the private and public cloud, and it is always difficult to write everything with the right words. So, I totally agree with most of the comments made, and I wanted to make sure a couple of them were addressed in this post. Let’s start with the cloud in general:




A lot of you said that the cloud is just a buzzword (or even just someone else’s computer).


I know it’s funny, and I know people are still trying to figure out what cloud is exactly, but for now we (our companies and customers) are calling it cloud. And I know we techies want to set things straight, but for now let’s all agree on calling it cloud, and just be done with it (for the sake of all people that still see the computer as a magical box with stardust as its internal parts, and unicorns blasting rainbows as their administrators.)


The thing is, I like the comments because I think posts should always be written as conversation starters. We are here to learn from each other, and that’s why we need these comments so badly.


The private cloud (or infrastructure) is a big asset for many of the companies we work for. But they pay a lot of money to just set up and maintain the environment, where the public cloud just gives them all these assets and sends a monthly bill. Less server cost, less resource cost, less everything, at least that’s what a lot of managers think. But as a couple of the comments already mentioned, what if things go south? What if the provider goes bankrupt and you can't access your data anymore?


In the last couple of years, we've seen more and more companies in the tech space come up with solutions, even for these kinds of troubles. With the right tools, you can make sure your data is accessible, even if your provider goes broke and the lights go out. Companies like Zerto, Veeam, SolarWinds, VMware, and many more are handing you tools to use the clouds as you want, while still being in control and able to see what is going on. We talked about DART and SOAR, and these are very important in this era and the future ahead. We tend to look at the marketing buzz and forget that it's their way of saying that they often don't understand half of the things we do or say, and the same goes for a lot of people outside the IT department. In the end, they just want it KISS, and that's where a word like "cloud" comes from. But let's go back to hybrid.


So what is hybrid exactly? A lot of people I talk to are always very outspoken about what they see as hybrid cloud. They see the hybrid cloud as the best of both worlds: private and public clouds combined. For me, the hybrid cloud is much more than that. It can be any combination: even all public but shared among multiple providers (multi-cloud, anybody?!?), or private and public clouds on-premises, and so on. In the end, the cloud shouldn't matter; it should just be usable.


For me, the hybrid solution is what everybody is looking for, the one ring to rule them all. But we need something software-defined to manage it all.


That's why my next post will be about the software-defined data center. It's another buzzword, I know, but let's see if we can learn a bit more from each other on where the IT world is going to, and how we can help our companies leverage the right tools to build the ultimate someone else’s computer.


See you in Vegas next week?!?


The Public Cloud

Posted by arjantim Aug 9, 2016

A couple of years ago, nobody really thought of the public cloud (although that might be different in the US), but things change, quickly. Since the AWS invasion of the public cloud space, we’ve seen a lot of competitors try to win their share of this lucrative market. Lucrative is a well-chosen word here, as most of the businesses getting into this market take a big leap of faith, and most of them have to take losses for the first couple of years. But why should the public cloud be of any interest to you, and what are the things you need to think about? Let’s take a plane and fly over to see what the public cloud has to offer, and whether it will take over the complete datacenter or just parts of it.


Most companies have only one purpose, and that is to make more money than they spend… And where prices are under pressure, there is really only one thing to do: cut costs. A lot of companies see the public cloud as cutting costs, as you’re only paying for the resources you use, and not for all the other stuff that is also needed to run your own "private cloud." Because of this, they think the public cloud is cheaper than rebuilding their datacenters every 5 years or so.


To be honest, in a lot of ways these companies are right. Moving certain workloads to the public cloud will certainly help cut costs, and it might also make a great test/dev environment. The thing is, you need to determine the best public cloud strategy per company, and it might even be necessary to do it per department (in particular cases). But saying everything will be in the public cloud is a bridge too far for many companies… at the moment.


A lot of companies are already running loads of workloads in the public cloud without even really realizing it. Microsoft Office 365 (and in particular Outlook) is one of the examples where a lot of companies use the public cloud, sometimes without really looking into the details or whether it is allowed by law. Yes, that’s right: when going public, you even need to think about what can and what can’t be put in the cloud. Some companies are prohibited by national law from putting certain parts of their data in a public cloud, so make sure to look into everything before telling your company or customer to go public.


Most companies choose a gentle path towards the public cloud and choose the right workloads to go public. This is the right way to go if you’re an established company with your own way of working, but then again, you need to think not only about your own policies, but also about the law your company needs to follow.


In my last post on the private cloud, I mentioned the DART framework, as I think it is an important tool for going cloud (private at first, but public as well). In this post on the public cloud, I want to go for the SOAR framework.


Security - In a public cloud environment, it is really important to secure your data. IT should make sure the public part(s) as well as the private part(s) are well secured and all data is safe. Governance, compliance, and more should be well thought out, and re-thought at every step of the way.


Optimization - The IT infrastructure is a key component in a fast-changing world. As I already mentioned, a lot of companies are looking to do more for less to get more profit. IT should be an enabler for the business, not some sort of fire brigade.


Automation - The key to faster deployments. It’s the foundation for continuous delivery and other DevOps practices. Automation enforces consistency across your development, testing, and production environments, and ensures you can quickly orchestrate changes throughout your infrastructure: bare-metal servers, virtual machines, cloud, and container deployments. In the end, automation is a key component of optimization.


Reporting - A misunderstood IT trade. Again, it is tightly connected with optimization, but also with automation. For me, reporting is only possible with the right monitoring tools. If you want to do the right reporting, you need to have a "big brother" in your environment. Getting the right reports from public and private is important, and with those reports the company can further fine-tune the environment.
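The consistency point under Automation can be shown with a toy sketch (plain Python, not any real configuration tool; all setting names are invented): describe the desired state once and apply it idempotently, so development, testing, and production converge on the same configuration.

```python
# Invented database settings used purely for illustration.
desired_state = {
    "max_connections": 200,
    "log_min_duration_ms": 500,
    "shared_buffers_mb": 1024,
}

def apply_desired_state(current, desired):
    """Bring `current` in line with `desired` and return what changed.
    Running it a second time changes nothing (idempotent)."""
    changes = {}
    for key, value in desired.items():
        if current.get(key) != value:
            changes[key] = value
            current[key] = value
    return changes

server = {"max_connections": 100, "shared_buffers_mb": 1024}
print(apply_desired_state(server, desired_state))  # reports the drift
print(apply_desired_state(server, desired_state))  # → {} (nothing left to fix)
```

This is, in spirit, what declarative automation tools do at datacenter scale: the desired state is the single source of truth, and every run converges the environment toward it instead of layering one-off changes.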


There is so much more to say, but I’ll leave it at this for now. I really look forward to the comments, and I know there is no "right" explanation for private, public, or hybrid cloud, but I think we need to help our companies understand the strength of the cloud. Help them sort out what kind to use and how. We’re here to help them use IT as IT is meant to be, regardless of the name we give it. See you next time, and in the comments!


The Private cloud

Posted by arjantim Jul 30, 2016

In a private cloud model, your IT department controls a secure and unique cloud environment to manage your resources. The difference with the public cloud is that the pool of resources is accessible only by you, which makes management much easier and more secure.


So, if you require a dedicated resource, based on performance, control, security, compliance or any other business aspect, the private cloud solution might just be the right solution for you.


More and more organisations are looking for the flexibility and scalability of cloud solutions. But many of these organisations struggle with business and regulatory requirements that, they think, keep them from being the right candidate for public or private cloud offerings.


It can be that you work within a highly regulated environment that is not suitable for public cloud, and you don't have the internal resources to set up or administer suitable private cloud infrastructure. On the other hand, it might just be that you have specific industry requirements for performance that aren't yet available in the public cloud.


In those cases, the private cloud, as an alternative to the public cloud, can be a great opportunity. A private cloud enables the IT department, as well as the applications themselves, to access IT resources as they are required, while the datacentre itself runs in the background. All services and resources used in a private cloud are defined in systems that are accessible only to the user and secured against external access. The private cloud offers many of the advantages of the public cloud, but at the same time it minimises the risks. As opposed to many public clouds, the criteria for performance and availability in a private cloud can be customised, and compliance with these criteria can be monitored to ensure that they are achieved.


As a cloud or enterprise architect, a couple of things are very important in the cloud era. You should know your application (stack) and the way it behaves. By knowing what your application needs, you can determine which parts of the application could be placed where: private or public. A good way to make sure you know your application is using the DART principle:


Discover - Show me what is going on

Alert - Tell me when it breaks or is going bad

Remediate - Fix the problem

Troubleshoot - Find the root cause
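As a rough illustration of how those four steps hang together, here is a minimal sketch in Python (not any particular product; the metric names and limits are made up): Discover current values, Alert on anything out of bounds, and hand the alerts to the Remediate/Troubleshoot steps.

```python
def dart_cycle(metrics, limits):
    """Discover current metric values and Alert on breaches; the returned
    alerts are the input for the Remediate and Troubleshoot steps."""
    alerts = []
    for name, value in metrics.items():          # Discover
        limit = limits.get(name)
        if limit is not None and value > limit:  # Alert
            alerts.append((name, value, limit))
    return alerts

# Invented application metrics and thresholds.
metrics = {"query_latency_ms": 850, "cpu_pct": 60, "disk_queue": 12}
limits = {"query_latency_ms": 500, "cpu_pct": 85, "disk_queue": 8}

for name, value, limit in dart_cycle(metrics, limits):
    # Remediate/Troubleshoot: in practice a runbook or root cause
    # analysis would start here.
    print(f"ALERT {name}: {value} exceeds limit {limit}")
```

A real tool collects these metrics continuously from agents or APIs across the whole stack; the point of the sketch is only the cycle itself.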



If you run the right tools within your environment, it should be easy to discover what is going on, where certain bottlenecks are, how your application is behaving, and what its requirements are. With that knowledge, the step to hybrid is much easier to make. But that is for another post; first I'll dive into the public cloud a little further next time.


You guys rock!

Posted by arjantim May 26, 2014

For the last couple of weeks, I've been the storage ambassador for the thwack community. I've really enjoyed sharing my thoughts with you guys and reading all of your comments. I noticed that the thwack community is great, and there is a lot of knowledge floating around that needs to be shared.


In the ever-changing world of IT, storage is a component that really matters. In a software-defined world, where every cloud does things as a service, storage is one of the keystones. Laying the foundation is done by choosing the materials and plotting the right sizes. I've seen a lot of companies struggle with their infrastructures because they didn't lay the right foundation.


That's not really where I wanted to go, though. As said, I really liked the interaction in the thwack community. And it made me wonder what you guys think of it, and where you see room for improvement.


What are the questions you need answers to?


What would you want to hear more about?


What resources do you use?


There is so much we can help each other with, and I know you guys and girls rock, but let's keep helping each other! I, for one, am more than willing to answer the questions you have, and I'm eager to hear what you are using, reading, and working on.


See you in the comments! And keep on rocking!

With a lot of customers running their infrastructure almost 100% virtualized these days, I see more and more people moving away from array-based replication to application-based (or hypervisor-based) replication. With Zerto ZVM, VMware vSphere Replication, Veeam Replication, PHD Virtual ReliableDR, and hyper-converged vendors offering their own replication methods, the once-so-mighty array-based replication feature seems beaten by application-based replication.


Last week I went to a customer where storage-based replication was the only replication type used, but their architects were changing the game in the virtual environment, and application-based replication was the road they wanted to ride... Where the virtualization administrators and most of the other administrators saw a lot of potential in application-based replication, the storage administrators and some of the others were more convinced by what storage-based replication had offered them all these years.


What about you? Do you prefer storage-level replication? Or is application/hypervisor-level replication the way to go?


In IT, we're used to having a buzzword every now and then. Most technicians just continue doing what they're good at and maintain and upgrade their infrastructure in a manner only they can, as they know how the company is serviced in the best way possible with the available funds and resources.


As a successor to the cloud buzzword, we now have software-defined everything: Software-Defined DataCenter (SDDC), Software-Defined Networking (SDN), Software-Defined Storage (SDS), and so on. And although I don't like buzzwords, there is a bigger meaning behind software-defined everything.


When looking at the present datacenter, you can see that the evolution of hardware has been very impressive. CPU, storage, RAM, and network have evolved to a stage where software seems to have become the bottleneck. That's where software-defined everything comes into the picture: building the software that will use the full potential of the hardware. My only point is that saying everything should be "software defined" bypasses everything the hardware vendors have done, and will continue doing, which is a tremendous amount of hard work, research, and development to give us the datacenter possibilities we have today. So naming it "software defined" is wrong, if you ask me.


Looking at storage, there is a lot of great storage software to leverage the just-as-great storage hardware. Look at some of the SolarWinds Storage Manager features, like:



  • Heterogeneous Storage Management
  • Storage Performance Monitoring
  • Automated Storage Capacity Planning
  • VM to Physical Storage Mapping

You can see how software uses the hardware to provide technicians the tools they need to manage their datacenter, giving the customer the IT they need. In future posts I would like to go deeper into some of these features, but for now I just wanted to share my thoughts on why software is an extension of hardware AND the other way around. You’re more than welcome to disagree, and leave your view in the comments.


See you in the comments

During the last few weeks (or even years), there has been a lot of news on flash: flash as a cache, All-Flash Arrays (AFA), hybrid arrays, and flash in memory banks (UltraDIMM). It seems the start-ups make a lot of rumble about all this new technology, but looking at the announcements made during the first two days of EMCWorld2014, it looks like the big fish are hunting the small ones, circling the smaller fish, ready to eat them… or not ;-D


Processing the IOPS as close to the CPU as possible, and with the quickest medium out there, seems to be the way to go. But we need a way to leverage this. Which application goes on which medium, and when will the medium "flush" the data to the next tier? Or will we just rely on caching software and let it decide where the data will be stored?


Everybody seems to be using flash as the tier behind RAM, but does that mean HDDs are dead? Or will they function as the capacity tier as long as flash is much more expensive than traditional storage? And what about the future, like ReRAM or other flash successors?


But in the midst of all this rumble, there is a question that needs answering… How do we manage this new storage world, and in particular, how do we manage the multiple storage arrays a lot of companies have these days? I'm writing a VMware design for a company in the Netherlands, and they have HP and NetApp storage in their datacenters, and managing them is not as simple as it could be… What is your secret to managing the datacenter of the future (or the one you're managing now)? Or do you already use the perfect tool, and are you willing to share?
