In my many years as a Basis Administrator, makeshift Database Administrator, Landscape Architect, and Teen Drama Counselor, I have been through a ton of projects. Be it implementations, upgrades, or migrations, they all present varying degrees of difficulty. None are quite as complex as a migration, though.
Navigating SAP tools, notes, media, licensing, hardware specifics, etc. can turn into the stuff of nightmares. I've had projects that began as something akin to Guardians of the Galaxy, then suddenly did a 180 and started feeling like a cheesy 80s sci-fi movie like Flash Gordon, where that magic just feels… gone. Projects like these tend to burn out the team performing them, and generally feel like a Three Stooges skit throughout.
The following paragraphs are dedicated to those projects that suffer from one unexpected challenge after another. We'll explore where several of these went wrong (in my most humble opinion) and, with any luck, provide a no-holds-barred assessment for problematic projects that you're working on (if you're brave enough to be introspective).
Matt, what gives with the title?
If you’ve been in the SAP world long enough, then you’ve invariably had ‘the feeling’. That moment when you’re about to go into a phase of your project, like a dry-run perhaps, and quite literally have no idea whether it’s going to work. The hum of the fluorescent lights is audible; your heart palpitates and your breath goes short as a single drop of sweat comes off your brow. You know that if it's not error free, there will be a delay to the project. To quote a Tennessee-ism, your ass is ‘chewing the seat’.
It’s at this point your mettle is tested, because nothing up to this point has gone right in the entire project, but it all comes down to this moment here. Odds are, you’re thinking to yourself ‘it didn’t have to go this way’. Odds are, you’re right. I’ve been there, kindred soul, and fought this fight, went head down, and said in my mind… "Hold my beer and watch this".
1. The full-sized sandbox
This is something we preach constantly here at Bluefin. If you look at many of our migration-related blogs, they mention copying production down to your sandbox environment. To be clear, the sandbox is typically the environment that comes before development: the guinea pig, if you will. The primary reasoning for this is so we can cut our teeth on a production-sized system, with all the potential weirdness that comes along with it. If we can identify and rectify issues here, then we virtually start our project out ahead of the curve. Dust that dude off, and you have an ideal staging environment that sets the tone for the rest of the project. Note that this only works if you consider point 2…
Here you’ll give me one of two arguments:
“Sandbox is nowhere near sized like Production”
“But refreshes take <x> days to finish”
I don’t mean to be ugly here, but I’m afraid I must… suck it up, buttercup. Your sandbox, from a CPU/RAM perspective, doesn’t have to be the same total size as Production. That said, you do want to make sure it’s consistent with the production app tier (i.e. if I have ten 8-CPU x 64 GB RAM servers for Production, have one or two of that same size for SBX; see below as to why). As for the time and effort of a refresh, that’s the cost of doing business, and simply put, it’s the right thing to do. The simple question is: would you rather work through issues in a system where you have the luxury of time, or within a mission-critical Production system? Trust me, it’s a horrible feeling when you hit a unique problem in Production.
2. Consistency is key
I often hop up on the soapbox and evangelize the importance of consistency in the environments. From application server sizes, database memory ratios, and profile parameters, to even Operating System setups, we need to act, to a certain extent, like our sandbox, development, and test environments are ‘production-like’. How can I possibly expect the production server to act the same, if every environment is entirely different? Make sure that if you’re running some sort of clustering or disaster recovery, it is replicated in one of the systems before it. This is critical and can save you days of potential downtime and misery.
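To make that concrete, here's a minimal sketch of what an automated consistency check might look like. Everything here is illustrative: the parameter names, the "name = value" profile file format, and the two-environment comparison are assumptions, not a real SAP toolchain. The idea is simply that consistency should be verified by script, not by eyeball.

```shell
#!/bin/sh
# Hypothetical sketch: flag profile parameters that differ between two
# environments. Parameter names and file format are illustrative only.

# A handful of parameters worth keeping consistent across the landscape.
PARAMS="abap/buffersize em/initial_size_MB rdisp/wp_no_dia"

# diff_params <prod_profile_file> <sandbox_profile_file>
# Prints one MISMATCH line per parameter whose values differ.
diff_params() {
  for p in $PARAMS; do
    pv=$(grep "^$p *=" "$1" | sed 's/.*= *//')   # value in Production
    sv=$(grep "^$p *=" "$2" | sed 's/.*= *//')   # value in sandbox
    [ "$pv" != "$sv" ] && echo "MISMATCH $p: PRD='$pv' SBX='$sv'"
  done
  return 0
}
```

Run it against a pair of profile files during every refresh and the drift between environments stops being a surprise you discover at cutover.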
3. Practice, practice, practice
So, we’ve done the full-sized sandbox, development, and QA. We have a right swagger to our walk because we’ve tuned and retuned the process until we have it down, right? Nope… if you’ve got the time, use the motto ‘do it again’. That’s right, refresh the system and jam another mock run out. It tightens up all your estimated timings, lets you practice hand-offs, and generally gets rid of the typical flotsam and jetsam of awkwardness you tend to run into on your production run.
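The 'tightens up your timings' point works best when the mock runs are logged, not remembered. As a hypothetical sketch, suppose each run's phase durations are captured as simple "phase,minutes" CSV lines (a format I'm assuming for illustration); a few lines of awk then show exactly which phases are drifting between runs.

```shell
#!/bin/sh
# Hypothetical sketch: compare phase timings (in minutes) between two mock
# runs logged as "phase,minutes" CSV lines. The log format is an assumption.

# compare_runs <run1.csv> <run2.csv>
# Prints each phase with its delta versus the earlier run.
compare_runs() {
  awk -F, '
    NR == FNR { first[$1] = $2; next }                     # load run 1 timings
    $1 in first { printf "%s: %+d min\n", $1, $2 - first[$1] }
  ' "$1" "$2"
}
```

A phase that swings wildly from run to run is exactly the one to rehearse once more before the production cutover.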
4. Testing, testing… 1,2,3…
This, one hundred times one thousand, this. How confident are you that a full regression test covers all your business processes? This is the question you should ask yourself, over and over again. Then one more time just for chuckles.
As much as I know that 100% testing is virtually impossible, you should have enough variation in your testing to certify nearly all the business processes you run regularly. There should be no excuses here. Do not let someone tell you ‘the data’s not right’, or ‘it’s too much trouble’. I wish I could express the number of post-go-live issues where the question was asked, ‘was it tested?’ And the response... simply crickets. When someone renders one of those insanely weak arguments, come to one realization: it’s a trap, and you will pay for it later.
5. Give yourself the most precious commodity… time
The biggest mistake I’ve seen made, time and time again, is compressing the timeline for a migration. Think about it this way: if a major upgrade project normally takes you six months, front to back, don’t plan on performing the migration in three. Remember: test thoroughly, practice often, and plan it all into your timeline.
6. Calm the hell down
After reading that title, your response is probably something like, ‘how dare you!?!?’, but I’m dead serious here. There is nothing that promotes more mistakes than panic. Consider this in your next project, how well do you face adversity? Or more importantly, how well does your team face adversity? Breaking the concentration of the do-er, in the middle of his/her do-ing is dangerous. Do you want to earn the respect of your hard-working, spit-in-the-face-of-adversity project team? Well then, run interference for them next time they’re facing an issue. Stem the invariable tide of status requests that come their way. Ask them if you can get them some coffee, ask them if there’s any way you can help, then for the love of all things soft and cuddly, get out of their way and let them do their job. This will earn you tons of respect and you will officially be the best manager / project manager / program manager ever.
Then after, just for grins, buy them a fermented beverage of their choice. Bottoms up for successful projects!