
How Lomero's hook score works (the 0-to-100 number explained)

The hook score isn't a vibe check. It's a weighted rubric across pattern match, payoff timing, audience signal, and specificity. Here's the full breakdown.

Every Reel, TikTok, and YouTube Short you paste into Lomero gets a hook score from 0 to 100. Most tools that quote a "virality number" are doing a vibe check and calling it an algorithm. Lomero's score is a weighted rubric with specific inputs, and this post explains every one of them.

The goal isn't to turn content into a paint-by-numbers exercise. It's to give you a defensible answer to "why did this hook work?" that's better than "I don't know, the algorithm liked it."

What the score is measuring

The score is a retention probability estimate for the first 3 to 5 seconds of the video, calibrated against patterns that historically held attention on short-form platforms. It's not a prediction of views or a promise of virality. It's a read on whether the opening is doing the job the opening has to do.

Scores fall into rough bands:

80 to 100. Strong hook. The opening names a promise, signals an audience, and delivers a first beat of payoff in under 3 seconds.

60 to 79. Workable hook. The pattern is recognizable but at least one lever is weak. Most fixable problems live here.

40 to 59. Soft hook. The opening is vague, slow, or buried. Likely to underperform even with strong content after the hook.

0 to 39. Broken hook. Usually a sign the video opens with setup, a text-only cold open, or a soft ramp-up that gives the viewer a clean reason to scroll.
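The bands above can be sketched as a simple classifier. The band names and cutoffs are taken from this post; the function itself is illustrative, not Lomero's source code:

```python
# Illustrative sketch of the score bands described above. The cutoffs
# (80/60/40) and band names come from the post, not from Lomero's code.
def score_band(score: int) -> str:
    """Map a 0-100 hook score to its rough band."""
    if score >= 80:
        return "strong"
    if score >= 60:
        return "workable"
    if score >= 40:
        return "soft"
    return "broken"
```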

The four components

The total score is the weighted sum of four components. The weights vary slightly by platform, but the components are the same.

1. Pattern match (30%)

Does the opening follow a hook pattern that historically works on this platform? Lomero's rubric references patterns like the contrarian claim, specific number, direct callout, mistake reversal, time-boxed promise, and others covered in the hook pattern posts for TikTok and YouTube Shorts.

Matching a pattern gets you baseline credit. Executing the pattern well adds more. A contrarian claim that's defensible and specific scores higher than a contrarian claim that's vague or unoriginal.
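To make "baseline credit for matching" concrete, here is a toy keyword heuristic for two of the patterns named above. The regexes and pattern names are hypothetical; Lomero's actual catalog is model-driven and judges execution quality on top of the match:

```python
import re

# Hypothetical keyword signals for two catalog patterns. A real matcher
# is far richer; this only illustrates the idea of baseline credit.
PATTERN_SIGNALS = {
    "direct_callout": re.compile(r"\bif you('re| are)\b", re.IGNORECASE),
    "time_boxed_promise": re.compile(r"\bin (under )?\d+ (seconds|minutes|days)\b", re.IGNORECASE),
}

def matched_patterns(hook_text: str) -> list[str]:
    """Names of catalog patterns the hook appears to match (baseline credit only)."""
    return [name for name, rx in PATTERN_SIGNALS.items() if rx.search(hook_text)]
```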

2. Payoff timing (25%)

How quickly does the opening deliver a first beat of the payoff?

If the video teases something interesting at 0:02 and shows a piece of it at 0:04, the timing is strong. If the first beat of payoff lands at 0:10, the timing fails regardless of how clever the hook is.

TikTok's timing tolerances are tighter than Instagram's, and Instagram's are tighter than YouTube Shorts'. The rubric accounts for this.
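A minimal sketch of the timing check, assuming per-platform cutoffs. The specific numbers below are invented to reflect the ordering described above (TikTok tightest); they are not Lomero's real tolerances:

```python
# Assumed per-platform cutoffs for the first beat of payoff, in seconds.
# The ordering (TikTok < Instagram < YouTube Shorts) is from the post;
# the exact values are illustrative placeholders.
PAYOFF_CUTOFF_SECONDS = {"tiktok": 4.0, "instagram": 5.0, "youtube_shorts": 6.0}

def payoff_timing_ok(first_payoff_at: float, platform: str) -> bool:
    """True if the first payoff beat lands inside the platform's tolerance."""
    return first_payoff_at <= PAYOFF_CUTOFF_SECONDS[platform]
```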

3. Audience signal (25%)

Does the opening name who the video is for? Either explicitly ("if you're a freelance designer...") or implicitly through vocabulary and framing.

Hooks that signal the audience early filter bad-fit viewers fast and hold good-fit viewers longer. Hooks that try to please everyone usually please nobody. This component penalizes vagueness and rewards specificity.

4. Specificity (20%)

How specific is the language in the opening? Precise numbers, named examples, and concrete claims score higher than generic phrasing.

"I saved money on groceries" scores lower than "I cut my grocery bill by $347 last month." Not because $347 is magical but because specificity signals the speaker has real experience, which signals the payoff is worth waiting for.
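One crude specificity signal is simply counting precise figures in the opening line. Lomero's real check is broader (named examples, concrete claims); this sketch only shows why "$347" outscores "money":

```python
import re

# Toy specificity signal: count numbers and dollar amounts in the hook.
# Illustrative only; the real rubric weighs more than digits.
def specificity_signals(text: str) -> int:
    return len(re.findall(r"\$?\d[\d,]*", text))
```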

The platform adjustments

The weights shift by platform because retention behavior differs.

On TikTok, payoff timing gets weighted slightly higher because the drop-off curve is steeper. A 9-second hook fails on TikTok even if every other component is perfect.

On YouTube Shorts, audience signal and specificity get weighted slightly higher because viewers often arrive with search intent. Hooks that don't clearly match that intent lose even when the pattern is otherwise strong.

On Instagram Reels, the weights are closest to the default split. Reels viewers have the most tolerance for build-up, so payoff timing matters somewhat less.
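The shifts above can be pictured as per-platform weight tables. The default 30/25/25/20 split is from this post; the platform-specific numbers below are invented purely to show the direction of each adjustment, with every row summing to 1:

```python
# Hypothetical weight splits. "default" is the split stated in the post;
# the per-platform shifts are illustrative, not Lomero's real weights.
WEIGHTS = {
    "default":        {"pattern": 0.30, "timing": 0.25, "audience": 0.25, "specificity": 0.20},
    "tiktok":         {"pattern": 0.28, "timing": 0.30, "audience": 0.23, "specificity": 0.19},
    "youtube_shorts": {"pattern": 0.27, "timing": 0.23, "audience": 0.28, "specificity": 0.22},
    "instagram":      {"pattern": 0.30, "timing": 0.24, "audience": 0.25, "specificity": 0.21},
}
```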

What the score doesn't capture

The score is about the hook. Everything after the hook is graded separately.

It doesn't capture the quality of the payoff itself. A strong hook with a weak payoff can still score 85 on the hook and flop overall. Lomero's segment-level analysis covers the parts past the hook.

It doesn't capture visual or audio quality. If the hook is brilliantly written but the video looks like it was filmed in 2011, the score won't reflect that.

It doesn't predict views. Views depend on distribution, timing, creator history, platform algorithm state, and a dozen other factors the hook can't control. A strong hook improves the odds; it doesn't guarantee them.

It doesn't work on music-only content. No speech, no hook signal, no score.

How the score is generated, technically

The underlying pipeline is:

  1. Transcribe the video (Whisper-based, timestamped).
  2. Segment the transcript (hook, context, problem, reveal, CTA).
  3. Extract the hook segment (the first 3 to 5 seconds, depending on pacing).
  4. Evaluate the four components against the platform-specific rubric.
  5. Weight and sum.

The evaluation combines pattern-matching (from a catalog of known hook structures) with specific checks on timing, audience markers, and specificity signals. The output is the number plus a text explanation of which components drove the score.
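Steps 4 and 5 reduce to a plain sum once each component is scored against its maximum points at the default split (30/25/25/20). The function name and dict shape are illustrative, not Lomero's actual API; clamping to each cap is a design choice here so no component can exceed its share:

```python
# Component maximums at the default 30/25/25/20 split stated in the post.
COMPONENT_MAX = {"pattern": 30, "timing": 25, "audience": 25, "specificity": 20}

def hook_score(component_points: dict[str, float]) -> float:
    """Sum per-component points, clamped to each component's maximum."""
    return sum(min(component_points[name], cap) for name, cap in COMPONENT_MAX.items())
```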

Worked example

Take a Reel that opens: "Three hours after my wedding, I found out my dad had been lying to me for 20 years. Here's the story."

Pattern match: Confession + narrative promise. Strong. ~27 of 30.

Payoff timing: Strong. The hook sets up a story that begins almost immediately. ~22 of 25.

Audience signal: Broad. No specific audience callout, but the stakes make it universally interesting. ~18 of 25.

Specificity: "Three hours after my wedding" is specific. "20 years" is specific. ~16 of 20.

Total: ~83. Strong hook.

Now take a variation: "I want to talk about something I've been thinking about recently. It's about family and how we handle secrets."

Pattern match: Soft ramp-up. Minimal. ~8 of 30.

Payoff timing: No payoff in the opening. ~6 of 25.

Audience signal: None clear. ~8 of 25.

Specificity: Abstract. ~5 of 20.

Total: ~27. Broken hook. Same underlying story, different execution, 50+ point difference.
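Recomputing both openings from the component points above makes the gap concrete. The numbers are the ones this post assigns:

```python
# The two worked examples above, component by component.
strong = {"pattern": 27, "timing": 22, "audience": 18, "specificity": 16}
weak = {"pattern": 8, "timing": 6, "audience": 8, "specificity": 5}

strong_total = sum(strong.values())  # 83: strong-hook band
weak_total = sum(weak.values())      # 27: broken-hook band
gap = strong_total - weak_total      # 56 points from execution alone
```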

How to use the score

Treat it as a diagnostic, not a verdict. If you score 60, the follow-up question is: which of the four components is dragging the number down? Lomero's output tells you. If payoff timing is the problem, the fix is to pull the reveal forward. If audience signal is the problem, the fix is to add a callout.

Don't chase a 90 on every video. A score in the high 70s is strong enough for most videos to perform well. The 90-plus hooks tend to be high-stakes or confession-driven, which isn't appropriate for every topic.

Don't optimize the score at the expense of the message. The score rewards structural patterns; it can't tell you if your hook misrepresents your video. A hook that scores 90 but promises something the video doesn't deliver will tank trust fast.

Frequently asked questions

Is the score deterministic?

Mostly. The same video gets the same score each time. Small variations are possible because the pattern-matching step uses language models that can interpret edge cases slightly differently across runs, but the variance is typically 2 to 3 points.

Can I game the score?

Writing hooks that match the patterns will raise the score. Whether that translates to real retention depends on whether the rest of the video delivers. The score is a structural check, not a magic spell.

Does the score work for videos in languages other than English?

Yes. The rubric is translated across 40-plus languages. Accuracy is highest on languages the transcription engine handles well (Spanish, Portuguese, Mandarin, Hindi, and the major European languages).

What if my video doesn't fit any of the standard patterns?

Non-standard hooks can still score high if the four components are strong. Pattern match is only 30% of the total. A novel opening that signals audience and delivers payoff fast will still score well.

How does the score differ from Opus Clip's virality score?

Opus Clip's score is calibrated to generated clips from long-form video and is less transparent about its methodology. Lomero's score is for any short-form URL and the rubric is public (this post). Different scopes, different intended use.

Will the rubric change?

Yes. As short-form platforms evolve retention behavior, the weights and pattern catalog get updated. Changes will be documented when they happen.


Related: the five beats of a viral short-form video covers the segment labels after the hook, and why your short-form video isn't converting uses the score as the starting point of a diagnostic framework.