Have you ever tried Cold Emailing?

Krishna Anubhav
3 replies
Here’s the biggest mistake cold emailers make when A/B testing: "We look at reply rates instead of the actual replies."

According to data scientists, reply rates are not a reliable metric until you get about 100 replies per cold email variation. If you end a test BEFORE you get 100 replies per variation, you won’t know (with confidence) which email performed better!

I’m no Ph.D., but that means if you get a 10% reply rate, you’d need to send 1,000 emails per variation, so 2,000 emails total, before you can properly run a two-variation A/B test.
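A quick way to sanity-check the sample-size point is to simulate it. This is a minimal sketch (not from the post): the true reply rates of 10% for email A and 12% for email B are illustrative assumptions, and the script estimates how often the genuinely better email also shows the higher observed reply rate at different send volumes.

```python
# Minimal Monte Carlo sketch. Assumed true reply rates (10% vs 12%) are
# illustrative placeholders, not figures from the post.
# Question: at a given send volume, how often does the truly better email (B)
# also show the higher observed reply rate?
import random

def better_email_wins(sends_per_variation, rate_a=0.10, rate_b=0.12, trials=2_000):
    """Fraction of simulated A/B tests where B beats A on observed reply rate."""
    wins = 0
    for _ in range(trials):
        replies_a = sum(random.random() < rate_a for _ in range(sends_per_variation))
        replies_b = sum(random.random() < rate_b for _ in range(sends_per_variation))
        if replies_b > replies_a:
            wins += 1
    return wins / trials

for sends in (100, 250, 500, 1_000):
    print(f"{sends:>5} sends per variation -> the better email looks better "
          f"{better_email_wins(sends):.0%} of the time")
```

Under these assumed rates, with only a couple hundred sends per variation the worse email still "wins" a sizable share of the simulated tests, which is the point about ending tests too early.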

Replies

Brian Regan
I've done cold emailing. It isn't easy, but if you put in the work upfront and are willing to experiment, you can get good results (real customers). If you put in the work to define your ICP and user personas, and are ruthless about collecting only those contacts, you will have much more success. In my experience this type of work has increased engagement (opens, replies, booked meetings) from a low of 10-20% up to 70-80%. I also wouldn't rely on 'replies' as the only metric for judging whether a campaign is effective; you will want to track how many emails are opened and how many times each one is opened. Lastly on this point, following up on LinkedIn after your cold email is the most effective tactic I've used to date.
@brian_regan I completely agree with this
Pablo Fatas
I mean, obviously in an ideal world you would have a lot of data, but just because the data isn’t what we call “statistically significant” (which is an arbitrary threshold, by the way), it still gives information. It is also not true that you need at least 100 replies for something to be significant. If email A has a 20% reply rate with 40 replies and email B has a 1% reply rate with 1 reply, that difference is still statistically significant. Now if the two are closer, say email A has 20% with 40 replies and email B has 15% with 45 replies, then you cannot assume that email A is better than email B. BUT if you had to pick one, you should still pick email A, because it is more likely to be the better one. You never get enough data at small scales in the startup world, and it is important to use a combination of this statistical reasoning and your own gut feeling.
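For anyone who wants to check those two examples, here is a rough two-proportion z-test sketch. The send counts are inferred from the quoted rates and reply counts (40 replies at 20% implies 200 sends, 1 reply at 1% implies 100 sends, 45 replies at 15% implies 300 sends), so treat them as assumptions rather than figures from the thread.

```python
# Rough two-proportion z-test for the examples above. Send counts are inferred
# from the quoted rates and reply counts; they are assumptions.
from math import sqrt, erf

def two_proportion_p_value(replies_a, sends_a, replies_b, sends_b):
    """Two-sided p-value for the difference between two reply rates."""
    p_a, p_b = replies_a / sends_a, replies_b / sends_b
    pooled = (replies_a + replies_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(two_proportion_p_value(40, 200, 1, 100))   # ~0.000006: clearly significant
print(two_proportion_p_value(40, 200, 45, 300))  # ~0.15: not significant
```

The first comparison comes out clearly significant and the second does not, which matches the reasoning above.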