Submit by 11 PM IST
Week 8 · Day 48 of 56
Progress: Days 1–47 complete ✓ · Day 48 in progress
⚡ Do This Right Now
1. Read the explainer
2. Pass the quiz (3/5)
3. Submit before 11 PM
🕚 Deadline: 11 PM IST
📅 Week 8 · Saturday

What is Clarigital?

Today you'll learn: what Clarigital is and why businesses pay for it — explained so clearly you could teach it to your parents by tonight.

⏱ ~20 mins
📖 Read + Quiz + Submit
✅ Need 3/5 to unlock
🔒 Saturday only
Week: 8 of 8 · Day: 48 of 56 · Program: 2-Month Program
📖 Read This First — About 8 Minutes

Growth hacking is not a shortcut. It is a discipline of rapid, structured experimentation that finds leverage faster than traditional marketing.

Growth hacking — more accurately called growth experimentation — is a systematic approach to finding the highest-leverage growth opportunities through rapid, low-cost experiments. The core insight is that most businesses have a small number of high-leverage growth levers that are either underused or undiscovered. The growth hacking discipline finds these levers faster than traditional quarterly planning cycles by running many small experiments, failing cheap, and doubling down on what works.

The practitioner framework uses two complementary tools: AARRR (Pirate Metrics) to identify which funnel stage is the current growth bottleneck, and ICE Scoring to prioritise experiments by Impact, Confidence, and Ease. Together these create a structured, repeatable process for identifying, prioritising, and running growth experiments without the waste of chasing every shiny idea.

🔬
Growth hacking is the scientific method applied to business growth. Hypothesis: if we change X, Y will improve by Z%. Experiment: test with minimum viable resource. Measure: did Y change? Learn: why or why not? Scale: if it worked, invest more. Kill: if it didn't, move on fast. The growth hacker runs 10 experiments where traditional marketers run 1. They're right more often not because they're smarter, but because they test more.
☠️
AARRR Metrics (Pirate Metrics)
Acquisition (how do users find you?), Activation (do they have a good first experience?), Retention (do they come back?), Revenue (do they pay?), Referral (do they tell others?). Identify your weakest stage first.
🧊
ICE Scoring
Impact (1-10: how much will this move the needle if it works?), Confidence (1-10: how sure are we it will work?), Ease (1-10: how simple is it to run?). Score = (I + C + E) / 3. Prioritise highest ICE scores.
Minimum Viable Experiment
Test the assumption with minimum resource before full commitment. A/B test before redesign. Prototype before product development. Manual before automation. Cheap to run, fast to learn.
📋
Experimentation Log
Document every experiment: hypothesis, test design, resources, results, decision (scale/kill/iterate). The log builds institutional knowledge. 3 months of experiments = pattern recognition no individual has.
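The ICE formula above is simple enough to sketch in a few lines of Python. This is only an illustration — the experiment names and 1-10 scores below are made up, not taken from the lesson:

```python
# Minimal ICE-scoring sketch: score each experiment 1-10 on three
# dimensions, average them, and run the highest-scoring one first.

def ice_score(impact, confidence, ease):
    """ICE = (Impact + Confidence + Ease) / 3, each scored 1-10."""
    return (impact + confidence + ease) / 3

# Hypothetical experiment backlog (illustrative values only).
experiments = [
    {"name": "Referral bonus email",   "impact": 7, "confidence": 5, "ease": 8},
    {"name": "Full checkout redesign", "impact": 9, "confidence": 4, "ease": 2},
    {"name": "Onboarding tooltip",     "impact": 6, "confidence": 7, "ease": 9},
]

for exp in experiments:
    exp["ice"] = ice_score(exp["impact"], exp["confidence"], exp["ease"])

# Highest ICE score runs first.
backlog = sorted(experiments, key=lambda e: e["ice"], reverse=True)
for exp in backlog:
    print(f'{exp["name"]}: {exp["ice"]:.1f}')
```

Note how the high-impact redesign ranks last: low confidence and low ease drag its average down, which is exactly the point — ICE steers effort away from big, uncertain, expensive bets and toward cheap, likely wins.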

🔮 The growth bottleneck first principle: Growth experiments must target the current bottleneck in the AARRR funnel, not the practitioner's favourite channel. If Retention is 20% (only 20% of users return after day 1), experimenting with Acquisition is waste - you're adding water to a bucket with a large hole. Fix Retention first, then Acquisition improvements compound on a solid base.
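As a sketch, the "fix the bottleneck first" rule is just a minimum over stage-to-stage conversion rates. The rates below are invented for illustration (they are not from the lesson), and in practice each stage's definition depends on your product:

```python
# Hypothetical AARRR funnel: each rate is the stage-to-stage conversion
# (fraction of users from the previous stage who make it through this one).
funnel = [
    ("Acquisition", 0.35),  # visitors -> signups
    ("Activation",  0.55),  # signups -> good first session
    ("Retention",   0.20),  # activated -> return after day 1
    ("Revenue",     0.40),  # retained -> paying
    ("Referral",    0.25),  # paying -> refer a friend
]

# The current bottleneck is the weakest stage-to-stage conversion.
bottleneck_stage, bottleneck_rate = min(funnel, key=lambda sr: sr[1])
print(f"Target experiments at: {bottleneck_stage} ({bottleneck_rate:.0%})")
```

With these numbers the sketch flags Retention at 20%: any Acquisition experiment would pour users into a bucket that leaks 80% of them after day 1, so Retention experiments come first.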

💡
Read the reference page below before taking the quiz.
🔬
Explore: Growth Hacking Frameworks - experimentation and scaling · clarigital.com · Business Strategy · ~7 mins
🧠 Quiz — 5 Questions
🧠
Day 48 Quiz
Score 3 or more to unlock your submission. Retry as many times as you want — every wrong answer tells you why.
5 questions · Need 3/5 · Unlimited tries · Instant feedback
Question 1 of 5
What does AARRR stand for and why is identifying the weakest stage the first step?
A: Advertising, Acquisition, Revenue, Reach, Retention
B: Acquisition, Activation, Retention, Revenue, Referral - the funnel stages. The weakest stage is the bottleneck that limits growth regardless of how well other stages perform
C: Awareness, Action, Reach, Revenue, Results
D: Analytics, Attribution, Reporting, Revenue, Return
✅ Correct: AARRR = Acquisition > Activation > Retention > Revenue > Referral. The bottleneck is the stage that limits all downstream stages. High Acquisition with low Activation means most acquired users never get value. High Activation with low Retention means users get value but don't return. Fix the bottleneck first.
❌ Incorrect: AARRR: Acquisition, Activation, Retention, Revenue, Referral. Identify the weakest stage because it's the bottleneck limiting all downstream growth. Improving acquisition when retention is broken is adding water to a leaky bucket.
Question 2 of 5
How does ICE scoring prioritise growth experiments?
A: Experiments are sorted alphabetically
B: Score = (Impact + Confidence + Ease) / 3. Higher scores indicate experiments most likely to move the needle efficiently
C: ICE selects only experiments with guaranteed results
D: Experiments are prioritised by cost (cheapest first)
✅ Correct: ICE score = average of Impact (how much will this move the metric?), Confidence (how likely is it to work?), and Ease (how fast/cheap is it to run?). A high-impact, high-confidence, easy-to-run experiment scores highest and runs first. Low-impact or low-confidence experiments with high effort are deprioritised.
❌ Incorrect: ICE = (Impact + Confidence + Ease) / 3. Score 1-10 on each dimension. Highest total score runs first. This prevents wasting effort on experiments that are hard, uncertain, or low-impact.
Question 3 of 5
What is a 'minimum viable experiment' and why is it the preferred approach for growth testing?
A: An experiment with the smallest possible budget
B: Testing a hypothesis with minimum resources before committing to full implementation - confirming the assumption is valid before investing in scale
C: A test that runs for a minimum of 30 days
D: An experiment that only requires one day to run
✅ Correct: MVE: test the assumption cheaply before building the full solution. If you hypothesise that a checkout discount increases conversion, test it with a manual coupon code before building an automated cart discount system. If the hypothesis fails, you've spent 1 hour, not 3 weeks.
❌ Incorrect: MVE: test the assumption with minimum resource before committing to full build. Cheap experiments reveal whether the hypothesis is valid before expensive implementation. Fail cheap and fast.
Question 4 of 5
A SaaS product has a 70% Day-1 user drop-off. What does this indicate about their AARRR funnel and what stage should experiments target?
A: Acquisition is the problem - need more users
B: Activation is the bottleneck - 70% of acquired users are not having a good first experience. Experiments should target onboarding and first-session value delivery
C: Revenue is the problem - pricing is too high
D: Referral is the problem - users are not referring friends
✅ Correct: 70% Day-1 drop-off = Activation failure. Users are arriving (Acquisition is working) but not experiencing sufficient value in their first session to return. Experiment targets: onboarding flow redesign, first-session 'aha moment' acceleration, tutorial optimisation. No point improving Acquisition until Activation works.
❌ Incorrect: 70% Day-1 drop-off = Activation bottleneck. Users arrived but didn't experience enough value to return. Fix Activation (onboarding, first-session value delivery) before investing more in Acquisition.
Question 5 of 5
What is the primary value of an experimentation log maintained over 3+ months?
A: Compliance documentation for regulators
B: Accumulated institutional knowledge and pattern recognition about what works for this specific product and audience - preventing repetition of failed experiments and identifying replicable success patterns
C: A record for annual performance reviews
D: Evidence for investor due diligence
✅ Correct: The experimentation log is a knowledge asset. After 3 months and 20+ experiments, patterns emerge: certain audiences respond consistently to certain messages, certain product changes reliably improve certain metrics, certain channels work in specific contexts. This pattern recognition is what distinguishes experienced growth practitioners from beginners.
❌ Incorrect: Experimentation logs build institutional knowledge. Patterns emerge after 20+ experiments. Failed experiments prevent wasted reruns. Successful patterns reveal what works for this specific product and audience.
Answer all 5 questions, then check your score.
✏️ Your Task
🔒

Score 3/5 to unlock this

Complete the quiz above first. The moment you score 3 or more, this section unlocks.


📝 Today's Task
Someone in your family runs a small business. In 3–4 sentences, explain Clarigital to them like you're actually WhatsApp-ing them right now. Your own words — not copied from the page.
Start like this: "So there's this platform I was reading about — it's basically for businesses that get too many WhatsApp messages to handle manually. It lets them..."
Submitting before 11 PM IST on your assigned Saturday counts as Day 48 complete.
Week 8 · Coming Tomorrow
Day 49 — Career Paths in Digital Marketing. Opens Monday on your assigned date.
2ME · Clarigital Student Programs · clarigital.com · Help: hello@clarigital.com