Pre-publish vs post-mortem: the right way to fix short-form video
Most video analysis happens after the views are already decided. The math says the leverage is before. A breakdown of what each approach catches, what it misses, and where each fits in a real creator workflow.
Two ways to analyze a short-form video. You can score it before you publish it, or you can study it after the platform has already distributed it. Most creators and most tools do the second. The math, the workflow, and the algorithm all favor the first.
This post lays out the difference. Not as marketing positioning, but as a practical question of what each approach actually catches, what it misses, and where each one fits in a realistic creator workflow.
What post-mortem analysis is good at
Post-publish analysis is the default. You check Instagram Insights, TikTok Analytics, or YouTube Studio after a video has run for 24-72 hours and you read the curves.
It's good at three things.
Confirming what you suspected. If you had a hunch a hook was weak, the drop-off curve confirms it. The numbers turn intuition into evidence.
Comparing across videos. Native analytics show your videos relative to each other. You can see that Reel A held 60% to completion while Reel B held 28%, even if you can't see why.
Closing the loop on macro decisions. Topic decisions, posting time, hashtag strategy, format experiments. Post-mortem data is what you'd use to decide whether to keep producing a series, whether posting at 11am beats 7pm for your audience, whether long captions help.
For these three jobs, post-mortem is the right tool. There's no other way to get those answers.
What post-mortem analysis is bad at
The problem is that creators rely on post-mortem for a fourth job it can't do well: improving the next video.
The feedback is too coarse. Native analytics show drop-off, but the resolution is usually 5-10 second buckets. Most short-form video is 15-30 seconds. A 5-second bucket on a 20-second video is 25% of the runtime. You see that viewers dropped between second 5 and second 10. You don't see that the drop happened at second 7.4 right after a specific phrase that stalled the pacing.
The actionable fix lives at the per-second level. The reporting lives at the per-bucket level. The gap is where the diagnostic falls apart.
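To make the resolution gap concrete, here is a minimal sketch in Python with invented numbers: a synthetic per-second retention curve for a 20-second video with a stall at second 7. The 5-second bucket averages can only say "something happened between 5 and 10"; the per-second data points at the exact beat. The numbers are illustrative only and this is not any platform's real export format.

```python
# Illustrative only: synthetic per-second retention for a 20-second video.
# The real stall is at second 7; every value here is made up for the example.
per_second = [1.00, 0.97, 0.95, 0.93, 0.91, 0.90, 0.88, 0.71,  # big drop at s7
              0.69, 0.68, 0.66, 0.65, 0.64, 0.63, 0.62, 0.61,
              0.60, 0.59, 0.58, 0.57]

# What a 5-second-bucket report shows: one averaged number per bucket.
buckets = [sum(per_second[i:i + 5]) / 5 for i in range(0, len(per_second), 5)]
print(buckets)  # ~[0.95, 0.77, 0.64, 0.59] -> "viewers dropped in the 5-10s bucket"

# What per-second data shows: the single worst second-to-second drop.
drops = [(per_second[i] - per_second[i + 1], i + 1) for i in range(len(per_second) - 1)]
worst_drop, second = max(drops)
print(second)  # 7 -> the fix points at one beat, not a 5-second window
```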
The feedback comes too late. By the time you read analytics on Video N, you've usually already shot Video N+1. The cycle time on creator workflows is faster than the analytics feedback loop. Even if the diagnostic were perfect, the timing is wrong.
The feedback is biased toward survivorship. Videos that didn't get distributed enough don't accumulate enough views to trust the curves. The 200-view video has analytics, but the sample is too small to read confidently. So you ignore it. The 50K-view video gets your attention. You optimize for the survivors. The pattern that killed the 200-view video stays uncorrected.
The feedback doesn't tell you why. You see what happened. You don't see why. Drop-off curves describe the symptom. The cause might be a hook, a CTA, a pacing fault, an audio mismatch, a niche-pattern misfit. Native analytics can't separate these. You're left guessing which variable to change.
This is why most creators run the same playbook after every flop, fix the wrong variable, and plateau. The diagnostic is bad, not the creator.
What pre-publish analysis is good at
Pre-publish analysis scores the video against patterns proven to perform before the algorithm sees it. You upload the draft, get a per-segment breakdown with scores and specific fixes, and decide whether to ship, re-cut, or re-shoot.
It's good at five things.
Catching weak hooks before they get distributed. The opening 3 seconds determine 60-70% of retention. A weak hook ships once and then trains the algorithm to deprioritize the channel. Pre-publish scoring catches the hook problem before it costs you distribution.
Diagnosing pacing at the per-second level. A pre-publish scorer can read the second-by-second pacing rate and flag the specific stall point. "Drop predicted at 0:08, second beat repeats the first; cut it" is a fix. "Retention drops in the 5-10 second bucket" is a description.
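As an illustration of what per-second pacing diagnosis can look like, here is a hedged sketch: given hypothetical word-level timestamps from a speech-to-text pass, it computes a words-per-second rate and flags the longest run of dead air as the predicted stall point. A real scorer would weigh more signals (repeated beats, filler phrases, shot changes), and this is not Lomero's actual method; the words, timestamps, and logic are all invented for the example.

```python
from collections import Counter

# Hypothetical word-level timestamps (word, start_second), the kind a
# speech-to-text model might produce. Values are invented for illustration.
words = [("stop", 0.2), ("scrolling", 0.5), ("this", 1.1), ("mistake", 1.4),
         ("costs", 1.8), ("you", 2.0), ("views", 2.3), ("every", 2.8),
         ("single", 3.1), ("time", 3.4), ("so", 6.9), ("here's", 7.4),
         ("the", 7.7), ("fix", 8.0)]

# Words spoken in each whole second of the draft.
rate = Counter(int(t) for _, t in words)
duration = int(max(t for _, t in words)) + 1
per_second_rate = [rate.get(s, 0) for s in range(duration)]
print(per_second_rate)  # [2, 3, 3, 2, 0, 0, 1, 2, 1] -> dead air at seconds 4-5

# Flag the longest run of silent seconds as the predicted stall point.
stall_start, best_len, run_start, run_len = None, 0, None, 0
for s, r in enumerate(per_second_rate):
    if r == 0:
        run_start = s if run_len == 0 else run_start
        run_len += 1
        if run_len > best_len:
            stall_start, best_len = run_start, run_len
    else:
        run_len = 0
print(stall_start, best_len)  # 4 2 -> "stall predicted at 0:04: two seconds of dead air"
```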
Scoring against patterns from your niche. Native analytics compare your videos to your other videos. Pre-publish scoring compares your video to the patterns that work across thousands of high-performing videos in your category. The reference set is broader and the diagnostic is sharper.
Killing weak videos before they ship. Sometimes the right answer to "should I post this" is no. Native analytics can't tell you that. They can only tell you, three days later, that you shouldn't have. Pre-publish scoring can.
Compounding across the channel. Algorithm distribution is reputation-weighted. Channels with consistent quality get pushed harder than channels that mix strong and weak. Pre-publish scoring tightens the average over months by filtering out the worst videos before they hit the feed.
What pre-publish analysis is bad at
Honest list. Pre-publish has limits.
It can't read your audience's specific reaction. A pattern that scores well in general can still misfire with your specific audience. Post-mortem catches this; pre-publish doesn't.
It can't substitute for taste. Scoring is structural. A creator with a strong sense of what their audience cares about will catch things a scorer misses. The scorer is a floor, not a ceiling.
It depends on the quality of the reference dataset. A scorer trained on the wrong niche, the wrong format, or stale data will give bad advice. The model is only as good as the videos it learned from.
It can over-fit toward formulaic content. If every video gets optimized to score well, every video looks similar. The creators who break out of plateaus often do it by violating one of the patterns the scorer would have flagged. Pre-publish should be a guardrail, not a straitjacket.
Where each fits in a real workflow
The right answer is not pre-publish or post-mortem. It's both, in the right order.
Pre-publish for every draft. A 5-minute scoring pass before you post. It catches the structural failures (hooks, pacing, CTA, niche-pattern drift). Then you re-cut, re-shoot, or kill the draft.
Post-mortem for series-level decisions. After a batch of videos has run, look at the analytics in aggregate. Which series is working? Which topic is plateauing? Which posting time outperforms? Use the macro data for macro decisions, not for trying to diagnose individual videos.
Pre-publish for the next video, not post-mortem on the last one. When the question is "what should I shoot next," the answer comes from pre-publish scoring of the new draft, not from re-reading the analytics on yesterday's flop. The flop's diagnostic is too coarse to fix the new video. The new draft's pre-publish scoring is exactly the right resolution.
This is the workflow change. Stop trying to fix Video N+1 by re-analyzing Video N. Score Video N+1 directly, before it ships.
The category gap most tools fall into
Most "AI video analysis" tools, including a few well-funded ones, are post-mortem tools that didn't realize they were post-mortem tools. They take a video URL, transcribe it, summarize it, and produce some scoring or breakdown. The user is expected to be a marketing analyst who reads the output and decides what to do.
That's analysis. It's not decision support.
Decision support means the tool tells you what to change, not just what's there. "Your hook scored 62. Add a pattern interrupt before the payoff. Predicted retention lift: +18%." That's prescriptive. "Your hook scored 62" alone is descriptive.
The category split between description and decision is the actual fight in this market. The description tools have most of the funding and most of the visibility. The decision tools are where the value is, because creators don't have time to do their own analysis. They need the analysis pre-digested into a fix list.
Lomero is built on this premise. Paste a draft or a public URL, get scores plus specific fixes, decide whether to ship. The free tier covers 5 analyses per month, which is enough to score every meaningful draft for a creator posting 3-4 times per week.
The summary
Post-mortem analysis is good at the macro questions and bad at the per-video diagnostic. Pre-publish analysis is good at the per-video diagnostic and limited at the macro questions. Use both. Lead with pre-publish for individual videos. Use post-mortem for series and channel decisions.
Most creators currently use only post-mortem. That's the gap. The leverage is in the workflow change, not in trying harder at the analytics you already have.
Related: "How to predict video performance before posting" is the practical scoring framework. "Why some hooks work and others don't" covers the mechanism layer that pre-publish scoring is built on.