Day 1 is assigned to a specific date by the 2ME team based on your batch start date.
📅 Check your confirmation email for your full task schedule.
Haven't received it? Email hello@clarigital.com and we'll sort it out quickly.
📅 Week 5 · Tuesday
day-26
What is Clarigital?
Today you'll learn: what Clarigital is and why businesses pay for it — explained so clearly you could teach it to your parents by tonight.
⏱ ~20 mins
📖 Read + Quiz + Submit
✅ Need 3/5 to unlock
🔒 Tuesday only
Week
Week 5 of 8
Day
26 of 56
Program
2-Month Program
📖 Read This First — About 8 Minutes
Most A/B test results are wrong. Not because the testing was dishonest, but because it was done without sufficient sample size or duration.
A/B testing is the gold standard of marketing optimisation — and also one of the most commonly misapplied practices in digital marketing. The core problem: most practitioners declare test winners too early, on insufficient data, resulting in decisions based on random variation rather than real performance differences. A test that runs for 3 days with 50 conversions per variant is not a statistically valid experiment — it's a coin flip with extra steps.
Statistically valid A/B testing requires three conditions: sufficient sample size (typically 100+ conversions per variant), sufficient test duration (minimum 1-2 weeks to control for day-of-week effects), and statistical significance (typically 95%+ confidence that the result isn't random). Without all three, test results should be treated as directional signals, not actionable conclusions.
🎲
A/B testing without statistical significance is like judging a coin as biased after 10 flips. If you flip a fair coin 10 times and get 7 heads, you might conclude the coin is biased toward heads. But statistically, 7 heads in 10 flips happens 12% of the time by pure chance. Only after 100+ flips does the pattern become statistically meaningful. Marketing tests work the same way. Small samples produce random noise that looks like signal.
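The coin-flip arithmetic above is easy to verify with the binomial distribution. A minimal Python sketch (the 7-in-10 numbers come straight from the analogy):

```python
from math import comb

def prob_exactly_k_heads(k: int, n: int) -> float:
    """Probability of exactly k heads in n flips of a fair coin."""
    return comb(n, k) / 2 ** n

# 7 heads in 10 flips of a fair coin: 120/1024, i.e. roughly 12% by pure chance
print(round(prob_exactly_k_heads(7, 10), 3))  # → 0.117
```

At 100 flips the same 70%-heads outcome (70 of 100) would be vanishingly unlikely for a fair coin, which is why larger samples separate signal from noise.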
📐
Test Design
Test one variable at a time. Define the primary metric before the test starts. Set minimum sample size and duration before looking at results. Never peek and stop early.
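The "never peek and stop early" rule is worth seeing numerically. The sketch below (all parameters are illustrative assumptions) simulates A/A tests, where both variants share the same true 5% conversion rate, so every "significant" result is a false positive. Stopping at the first interim look that crosses p < 0.05 produces far more false winners than evaluating once at a predeclared horizon:

```python
import random
from math import erf, sqrt

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test p-value (normal approximation)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = abs(conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

random.seed(0)
TRIALS, N, STEP, ALPHA = 300, 2000, 200, 0.05
peek_fp = fixed_fp = 0
for _ in range(TRIALS):
    # A/A test: both "variants" have the identical true 5% conversion rate
    a = [random.random() < 0.05 for _ in range(N)]
    b = [random.random() < 0.05 for _ in range(N)]
    # Peeking: declare a winner at the first interim look with p < ALPHA
    if any(p_value(sum(a[:k]), k, sum(b[:k]), k) < ALPHA
           for k in range(STEP, N + 1, STEP)):
        peek_fp += 1
    # Disciplined: evaluate once, at the predeclared sample size
    if p_value(sum(a), N, sum(b), N) < ALPHA:
        fixed_fp += 1

print(f"false positives with peeking: {peek_fp}/{TRIALS}")
print(f"false positives at fixed horizon: {fixed_fp}/{TRIALS}")
```

Run it and the peeking count should land well above the disciplined one, which is exactly why sample size and duration are fixed before the test starts.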
🔢
Sample Size Calculation
Use an online calculator (abtestguide.com). Input: baseline conversion rate, minimum detectable effect, desired confidence. Get: required sample size per variant. Never launch without this.
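The arithmetic those calculators perform can be sketched with the standard two-proportion sample-size formula. This is an illustrative implementation, not necessarily the exact formula any particular site uses; z = 1.96 corresponds to 95% confidence and z = 0.84 to 80% power, both common defaults:

```python
from math import ceil, sqrt

def sample_size_per_variant(baseline_rate: float, relative_lift: float,
                            z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Visitors needed per variant to detect the given relative lift."""
    p1 = baseline_rate                       # control conversion rate
    p2 = baseline_rate * (1 + relative_lift) # variant rate you hope to detect
    p_bar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p2 - p1) ** 2)

# e.g. 5% baseline conversion, hoping to detect a 20% relative lift (5% -> 6%)
print(sample_size_per_variant(0.05, 0.20))
```

Note this returns visitors per variant, not conversions: at a 5% conversion rate, the roughly 8,000 visitors it asks for correspond to 400+ conversions per variant, comfortably above the 100-conversion floor in the reading. Halving the detectable lift roughly quadruples the required sample, which is why small expected effects need large tests.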
📊
Statistical Significance
95% confidence = 5% probability the result is random. 99% for major decisions. Tools calculate this automatically. Never declare a winner below 95%.
❌
Common A/B Test Mistakes
Stopping early when one variant looks like it's winning, testing during unusual periods (holidays, promotions), changing other variables during the test, multiple simultaneous tests.
🔮 The test priority hierarchy: Always test highest-impact elements first. Headline on a landing page (can produce 20-50% conversion change) before button colour (rarely >5% change). Google Ads: test headlines (highest impact) before extensions. Email: subject line (determines open rate) before CTA copy. The effort of running a test should be proportional to the maximum possible impact of the variable being tested.
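The hierarchy above amounts to sorting a backlog of candidate tests by maximum potential impact. A toy sketch, using the reading's rough estimates plus one hypothetical figure:

```python
# Rough maximum-impact estimates; headline and button figures are from the
# reading above, the email figure is a hypothetical placeholder
candidate_tests = [
    ("button colour", 0.05),           # rarely >5% change
    ("landing page headline", 0.50),   # 20-50% conversion change possible
    ("email subject line", 0.30),      # hypothetical estimate
]
# Highest potential impact first: this is the order to run the tests in
test_queue = sorted(candidate_tests, key=lambda t: t[1], reverse=True)
print([name for name, _ in test_queue])
```

Each test consumes weeks of traffic, so the queue order decides how much improvement a quarter of testing can actually deliver.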
💡
Read the reference page below before taking the quiz.
Question 1 of 5
What are the three conditions required for a statistically valid A/B test?
A
A: Duration of 7 days, 100 total visitors, 80% confidence
B
B: Sufficient sample size (100+ conversions per variant), minimum 1-2 week duration, and 95%+ statistical confidence
C
C: Equal budget for both variants, same audience, same time of day
D
D: A/B test tool installed, Google Analytics connected, minimum Rs.10,000 budget
OK All three conditions must be met: sufficient sample size (100+ conversions per variant), sufficient duration (1-2 weeks minimum to account for day-of-week variance), and statistical significance (95%+ confidence). Missing any one invalidates the result.
NO Three conditions for valid A/B tests: 100+ conversions per variant, 1-2 week minimum duration, 95%+ statistical significance. All three required. Missing any one means results could be random noise.
Question 2 of 5
A landing page test has run for 3 days. Variant B shows a 25% higher conversion rate than Variant A, with 40 conversions per variant. Should you declare Variant B the winner?
A
A: Yes - 25% improvement is significant
B
B: No - 40 conversions per variant is insufficient for statistical validity. Continue the test until 100+ conversions per variant.
C
C: Yes - after 3 days any result is reliable
D
D: No - wait exactly 30 days regardless of sample size
OK 40 conversions per variant is insufficient. The 25% difference could easily be explained by random variation at this sample size. Continue the test to 100+ conversions per variant before evaluating. The 3-day duration is also too short to capture day-of-week variation.
NO 40 conversions is insufficient. With small samples, large apparent differences are often random noise. Continue to 100+ conversions per variant and 1-2 weeks minimum before declaring a winner.
Question 3 of 5
What does '95% statistical confidence' mean in the context of an A/B test?
A
A: The winning variant will produce 95% more conversions
B
B: There is only a 5% probability that the observed difference is due to random chance - 95% probability it reflects a real performance difference
C
C: 95% of users saw the test
D
D: The test ran for 95% of the planned duration
OK 95% confidence means: if we ran this test 100 times under the same conditions, 95 times the winning variant would truly outperform, and 5 times the apparent winner would be a random fluke. This is the standard minimum threshold for acting on A/B test results.
NO 95% confidence = 5% probability the result is random. In 100 identical tests, 95 would show the same winner for real reasons, 5 would show a false winner by random chance. Standard minimum threshold.
Question 4 of 5
An e-commerce company wants to A/B test their entire website against a complete redesign. Why is this a poor test design?
A
A: Full website tests are not allowed by Google
B
B: Testing multiple variables simultaneously makes it impossible to identify which specific change caused any performance difference
D
D: Full website tests always take too long to complete
OK When many elements change simultaneously, you can't isolate causation. If the redesign converts better, was it the headline, the layout, the images, the CTA, or some combination? Valid A/B tests change one variable. Multivariate testing (multiple simultaneous variables) requires much larger sample sizes and specialist analysis.
NO Multiple simultaneous changes prevent attribution. If the redesign performs better (or worse), you can't identify which specific change caused it. Test one variable at a time to isolate causation.
Question 5 of 5
Which element should be tested first on a landing page and why?
A
A: The footer links - they're seen by the most engaged visitors
B
B: The background colour - it affects the entire visual impression
C
C: The headline - it has the highest potential impact on conversion because it determines whether any visitor reads further
D
D: The button shape - CTA design is most important for conversion
OK The headline is the highest-leverage test element. It's the first thing visitors read, determines whether they engage further, and has the potential to change conversion rate by 20-50% in either direction. Test highest-impact elements first to maximise the return on testing effort.
NO Headline first. It determines whether visitors continue reading. High-impact tests (headline, hero image, CTA copy) before low-impact tests (button shape, footer, background colour).
Answer all 5 questions, then check your score.
✏️ Your Task
🔒
Score 3/5 to unlock this
Complete the quiz above first. The moment you score 3 or more, this section unlocks.
🏅
🎉 Day 26 — done!
Day 27 opens Tuesday.
📝 Today's Task
Someone in your family runs a small business. In 3–4 sentences, explain Clarigital to them like you're actually WhatsApp-ing them right now. Your own words — not copied from the page.
Start like this: "So there's this platform I was reading about — it's basically for businesses that get too many WhatsApp messages to handle manually. It lets them..."
0 / 800
From your registration confirmation email. Can't find it? Email hello@clarigital.com.
Submitting before 11 PM IST on your assigned Tuesday counts as Day 26 complete.
Week 5 · Coming Tomorrow
Day 27 — Client Onboarding - Technical Handover & Kickoff. Opens Tuesday on your assigned date.