“Let’s migrate to Aurora!”
“Let’s provision more IOPS, more CPU, etc…”
Sound familiar? It happens all the time. Databases become slow and we start throwing hardware at them.
In some cases it does help … briefly. I remember one case where a DBA ran a Microsoft tool that suggested new indexes. He called me in desperation: nothing worked, the system was frozen.
So we discussed a few ways to clean up the mess. The next day he called me again: it's all good, he had solved the problem. The solution? He had moved the data to an SSD drive.
But in most cases, it doesn't. That doesn't matter, because more hardware is supposed to equal better performance, and no amount of reality is going to destroy that assumption. In one extreme case, a CTO kept ordering CPUs for the database server, and the database would just swallow them, pinning the CPU at 100% as if nothing had changed. It took several weeks of this before he allowed real work on the server to actually remedy the situation.
Why does this happen? I don't really know, but I do have a few ideas. So, being the experimenter that I am, I started testing them.
Hypothesis #1: Management is afraid of "open-ended" development. If you suggest a refactoring effort that will take a long time, they get cold feet, because you may spend all that time and wind up with nothing to show for it.
Solution? Iterative development. Identify the resource hogs, refactor them quickly, and check the results. If the results are good, keep going; if not, re-evaluate.
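To make "identify the resource hogs" concrete, here is a minimal sketch of that first step, assuming a SQL Server instance (since the anecdote above involves a Microsoft tool) and a pyodbc connection. The connection string and the top-10 cutoff are placeholders, not a prescription.

```python
# Minimal sketch: list the top CPU consumers on a SQL Server instance
# via the query-stats DMV, so refactoring effort goes where it matters.
# The connection string and TOP 10 cutoff are placeholders.
import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")

TOP_CONSUMERS_SQL = """
SELECT TOP 10
    qs.execution_count,
    qs.total_worker_time / qs.execution_count   AS avg_cpu_microseconds,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
        (CASE qs.statement_end_offset
             WHEN -1 THEN DATALENGTH(st.text)
             ELSE qs.statement_end_offset
         END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
"""

def top_resource_hogs():
    """Print the statements burning the most CPU since the last restart."""
    with pyodbc.connect(CONN_STR) as conn:
        cursor = conn.cursor()
        for row in cursor.execute(TOP_CONSUMERS_SQL):
            print(f"{row.avg_cpu_microseconds:>12} us/exec  "
                  f"{row.avg_logical_reads:>10} reads/exec  "
                  f"{row.execution_count:>8} execs  "
                  f"{row.statement_text[:80]!r}")

if __name__ == "__main__":
    top_resource_hogs()
```

Run it, pick the top offender, refactor just that one, and run it again. That is the whole iteration.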
Hypothesis #2: There is no time to pursue a real solution. Something has to happen … like yesterday … even if it’s just a hack.
Solution? Rapid refactoring. With the right tools, we can speed up database development significantly.
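To show what a rapid refactor can look like in practice, here is a hedged sketch: a classic resource hog (an application loop firing one UPDATE per row) rewritten as a single set-based statement, with a crude timing check so the result is verified rather than assumed. The table, columns, and connection string are all hypothetical.

```python
# Hedged sketch of a typical quick win: replace a per-row loop
# (one UPDATE per order) with one set-based statement, and time both.
# Table and column names (orders, id, status) are hypothetical.
import time
import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")

def archive_orders_row_by_row(conn):
    """The resource hog: one round trip and one UPDATE per order."""
    cursor = conn.cursor()
    ids = [row.id for row in cursor.execute(
        "SELECT id FROM orders WHERE status = 'shipped'")]
    for order_id in ids:
        cursor.execute(
            "UPDATE orders SET status = 'archived' WHERE id = ?", order_id)
    conn.commit()

def archive_orders_set_based(conn):
    """The refactor: one set-based statement does the same work."""
    conn.cursor().execute(
        "UPDATE orders SET status = 'archived' WHERE status = 'shipped'")
    conn.commit()

def timed(label, fn, conn):
    """Crude before/after check: wall-clock time for one run."""
    start = time.perf_counter()
    fn(conn)
    print(f"{label}: {time.perf_counter() - start:.2f}s")

if __name__ == "__main__":
    with pyodbc.connect(CONN_STR) as conn:
        timed("row by row", archive_orders_row_by_row, conn)
        # Re-seed the test data between runs before comparing the two.
        timed("set based ", archive_orders_set_based, conn)
```

The point is not this particular rewrite; it is that the hack-sized time budget is often enough for a real fix, as long as you measure before and after.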
Let us know what you think about this important topic. What are some of the factors that keep you from doing things the optimal way?
We like people, so feel free to call or write. Humans only.