
Migration plans fail when analysis and execution drift apart

I read migration plans all the time. The ones that fail share one thing: the analysis and execution drift apart until the document is just a historical artifact nobody trusts. The part I watch is where the analysis team hands off a perfect map, and the execution team immediately starts discovering unmapped territory.

This is not about bad planning. It is about a structural gap. Analysis wants certainty; execution lives in uncertainty. When they stop talking, the migration becomes a story where nobody knows the ending until it is too late to change it.

Martin Fowler's recent fragments on harness engineering hit this directly. He writes about feedback loops being the core of modern delivery, not the artifacts we produce. A migration plan without tight feedback is just a wish list with dates attached.

The tension lives between the architect who wants a complete picture and the engineer who needs to ship something today. I have played both roles. I know the architect's fear of the unknown. I also know the engineer's frustration with a document that does not match the code.

The useful part is recognizing this tension as normal. The annoying part is pretending it does not exist. Most migration failures I have seen start with the best intentions and end with a parallel system built in the dark.

The advantage is leverage. Tight feedback between analysis and execution turns migration from a leap of faith into a series of small, reversible steps. The fear is discovering the real complexity at the 90% mark when you are out of time and budget.

I would rather have a living document that changes every week than a perfect one that is wrong by the time it is printed. The most accurate migration document is the one nobody reads, because the team is too busy updating it. That is not a bug; it is a feature.

Where Teams Usually Get It Wrong

The mistake is treating analysis as one phase and execution as another. This creates a handoff, and handoffs are where context dies. The analysis team produces a 200-page document. The execution team starts coding. Within a week, they find three assumptions that were wrong.

I usually look for the seams. The seam is where the analysis document meets the actual codebase. That is where the drift starts. The document says the system uses a single database. The code shows five connections, two of which are legacy and undocumented.

Azure AI Services documentation updates every month. GitHub releases ship faster than analysis can document them. I read the Azure OpenAI what's new page and see features that did not exist when the migration plan was written. That is not a failure of analysis. It is a failure of assuming analysis can be complete.

The common shallow approach is to freeze the analysis and defend it. This is brittle. It creates a situation where the execution team is punished for finding new information. I have seen teams hide discoveries because updating the plan would delay the project.

The better approach is to treat the analysis as a starting hypothesis, not a contract. The execution team's job is to test that hypothesis and report back. The architect's job is to listen and adjust. This is not a handoff; it is a conversation.

A Better Working Shape

I keep coming back to a simple shape: analyze one week, execute the next, review, repeat. This is not waterfall. It is not agile. It is a tight loop. The analysis team spends a week understanding a slice of the system. The execution team spends a week migrating that slice. Then they compare notes.

What surprised me was how small the slices need to be. A single API endpoint. A single database table. A single user flow. Anything larger and the feedback loop is too slow. The drift has time to build up.
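The loop over small slices can be sketched in a few lines of Python. This is illustrative only: MigrationSlice and review are names I made up for this post, not part of any real tool. The point is that a slice carries both the analysts' assumptions and the executors' findings, and the review is just the difference between them.

```python
from dataclasses import dataclass, field

@dataclass
class MigrationSlice:
    # Hypothetical model of one slice in the weekly analyze/execute/review loop.
    name: str                     # e.g. "orders endpoint"
    assumptions: list[str]        # what analysis believed this week
    findings: list[str] = field(default_factory=list)  # what execution found

    def review(self) -> list[str]:
        # The drift is whatever execution found that analysis did not assume.
        return [f for f in self.findings if f not in self.assumptions]

slice_ = MigrationSlice(
    name="orders endpoint",
    assumptions=["single database", "no retries"],
)
slice_.findings = ["single database", "undocumented legacy connection"]
print(slice_.review())  # → ['undocumented legacy connection']
```

The review step is deliberately trivial: if computing the drift takes more than a comparison, the slice was too big.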

The practical question for me is always: what is the smallest thing we can migrate and learn from? Not ship. Learn. Learning is the goal of the first 80% of a migration. Shipping is the last 20%.

Martin Fowler's article on harness engineering for coding agents makes this point. He writes about encoding team standards into fast feedback loops. A migration is similar. The standard is: does the new code match our understanding? The feedback loop is: run it, test it, measure it, adjust.

I would rather migrate one endpoint perfectly and learn from it than migrate ten endpoints poorly and be surprised. The leverage is in the learning, not the shipping. Shipping without learning is just moving code from one place to another.

What to Watch in Practice

The part I would watch is the communication pattern. Are the analysts and engineers in the same room? Do they share the same Slack channel? Do they review each other's work? If the answer is no, the drift has already started.

I usually care more about the questions being asked than the answers in the document. A question like "how does this handle retries?" is more useful than an answer that is probably wrong. The question leads to discovery. The answer leads to assumption.

The Azure AI Foundry docs show this pattern. They do not just list features. They show scenarios, tradeoffs, and decisions. This is what a migration plan should look like. Not a list of tasks, but a set of decisions with context.

GitHub release notes are another good model. They are short, concrete, and linked to actual code. A migration plan should be a series of release notes, not a project plan. Each note is a decision, a change, and a reason.
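One way to keep plan entries in that release-note shape, sketched as a data structure. The fields and example text are mine, invented for illustration; the only claim is that every entry forces a decision, a change, and a reason to be written down together.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class PlanEntry:
    # Hypothetical release-note-style plan entry: decision, change, reason.
    when: date
    decision: str   # what we decided
    change: str     # what actually changed in the code
    reason: str     # why, with context

entry = PlanEntry(
    when=date(2025, 1, 13),
    decision="Migrate the orders endpoint first",
    change="Rewrote GET /orders against the new service",
    reason="Smallest slice with real traffic; fastest feedback",
)
print(f"{entry.when}: {entry.decision}. Reason: {entry.reason}")
```

Frozen on purpose: an entry is a record of a decision already made. New information gets a new entry, not an edit, which is exactly how release notes behave.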

The useful part is tracking the delta. What did we think would take a week? What actually took a week? The difference is the drift. I would rather track the drift than hide it. The drift is the real information.
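Tracking the delta is simple enough to fit in a few lines. The numbers below are made up for the sketch; the habit is what matters: record planned days, record actual days, and report the signed difference instead of burying it.

```python
# Planned vs. actual days per slice (illustrative numbers).
plan = {"orders endpoint": 5, "users table": 3, "checkout flow": 5}
actual = {"orders endpoint": 9, "users table": 3, "checkout flow": 12}

for slice_name, planned in plan.items():
    drift = actual[slice_name] - planned
    print(f"{slice_name}: planned {planned}d, actual {actual[slice_name]}d, drift {drift:+d}d")

total_drift = sum(actual[s] - plan[s] for s in plan)
print(f"total drift: {total_drift:+d} days")  # → total drift: +11 days
```

A positive total drift is not a verdict on the team. It is the real information: the gap between the map and the territory, measured weekly instead of discovered at the 90% mark.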

Day-to-Day Impact

In day-to-day work, this changes how you spend your time. You spend less time writing the plan and more time updating it. You spend less time defending assumptions and more time testing them. You spend less time in meetings and more time in code.

The annoying part is the overhead. You have to update the document every week. You have to have the conversation. You have to admit when you were wrong. This is uncomfortable. It is also the only way to avoid the 90% surprise.

The fear is that without a perfect plan, you are flying blind. The reality is that with a perfect plan, you are flying blind with a map that is wrong. The map is not the territory. The territory changes while you are drawing the map.

I read the Azure OpenAI what's new page and see features that did not exist last month. I read GitHub releases and see breaking changes that invalidate my assumptions. This is the reality of modern software. The migration plan has to live in this reality, not in a static document.

The part I do not trust yet is any migration that does not have a feedback loop built in. I do not care how smart the team is. I do not care how thorough the analysis is. If the analysis and execution are not talking weekly, the plan is already wrong.

What I would rather do is start with a hypothesis, test it quickly, and adjust. This is not a new idea. It is just one that is hard to follow when the pressure is on to ship. The pressure makes you want a perfect plan. The reality makes you need a living one.

This is where it gets messy. The business wants certainty. The engineers want to learn. The architects want control. The only way to satisfy all three is to make the learning fast and the control adaptive. This is not a technical problem. It is a communication problem.

The practical question for me is: what is the smallest feedback loop you can create today? Not tomorrow. Today. Can you analyze one endpoint and migrate it by the end of the week? Can you review the results together? If not, why not?

That question exposes the real blocker. The answer is usually "because we have a plan." The plan becomes the enemy of the migration. The plan is a historical artifact. The migration is a living process. They cannot be the same thing.

I would rather have a process that adapts than a plan that is perfect. The process is the plan. The document is just a snapshot. The snapshot is useful, but it is not the thing itself.

The thing itself is the conversation between analysis and execution. Keep that conversation tight, and the migration has a chance. Let it drift, and you are just building a parallel system in the dark.

What is the smallest slice you can migrate and learn from this week?

Resources Worth Reading