Marketing Automation and Selling to Scientists

May 3, 2013

A recent talk I gave at The Institute of Direct and Digital Marketing on how we use Marketing Automation to help sell our equipment to scientists.


When Vince Cable Came to Town – The Value of Mentors

October 3, 2012

This is the text of a short talk I gave at the SJIC when Vince Cable came to Cambridge to launch the Growth Accelerator.

Our company makes microchip gas sensors. As three young engineers, when we got started we knew something about building a microchip, but nothing about building a business. In our first pitch to a VC we were discussing the product and they asked, “What’s your margin?” Not knowing what margin actually was, we put smiles on our faces and said emphatically, “Oh, our margin is very good”, hoping this was the right answer and adding it to the list of things we had to look up on Wikipedia that evening.

When we got our first term sheet for $2M, we threw ourselves into learning everything we could about term sheets and the legalese, and built a wonderful spreadsheet model that we intended to deploy to maximum effect during the negotiation! Our first ever mentor was Walter Herriot, who used to run the St John Innovation Centre; we arranged a meeting to get his advice. We walked into the office, told him about the offer and thumped the printout of the spreadsheet on the table. Walter looked at us, then at the mass of paper in front of him, slowly pushed it aside and said, “Guys, has anyone else offered you $2M?” This was a lesson for us in the difference between knowing about something and actually knowing what to do about it. This is where the experience and advice of a mentor is truly powerful.

Innovation, enterprise and entrepreneurship will play a large part in injecting dynamism and growth into the economy as we slowly recover. Any program that improves the chances of a nascent technology or company succeeding, employing people and growing on the world stage should be seized with both hands. The GrowthAccelerator aims to help 26,000 high-growth businesses; a great ambition and one that I hope it succeeds in. I’ve always been amazed at the willingness of successful entrepreneurs in Cambridge and beyond to give up large portions of their time to share their experience and expertise with the companies of tomorrow.

As I look at our company today, with 47 employees, $17M in capital raised and contracts with some of the world’s largest companies, it is easy to see that we are where we are because of the help of others.


A Sales and Marketing ‘Traveller File’

September 4, 2011

At Owlstone we manufacture complex instrumentation, with lots of parts and lots of people involved at each step in the process. We have a file of information called a ‘manufacturing traveller’ that starts out empty and, by the end of the manufacturing line, is full of documentation and test results. At each step in the build process, different members of the team complete some tests, add the results and pass the traveller on to the next team member. Each has their own responsibility and is trying to answer a specific question, but the integrated output answers a whole set of questions that no one person can answer individually. I think there is a parallel between the manufacture of physical goods and the manufacture of sales.

A sales person in the field tends to have the clearest understanding of the customer problem and the value of the solution. However, this doesn’t always make it downstream, with high fidelity, to those who generate marketing collateral. Equally, the marketing team will be generating significant insight into the sales messages that work through testing of ad copy, landing pages, email copy, offers etc. A sales and marketing ‘traveller file’ that gets passed between the team could help connect the different parts of the cycle. For a new product or application the in-field sales team will be trying to understand the customer pain and decide if there is a ‘rough fit’ with your offering. They are populating the traveller with information on the problem, the value of a solution, hot buttons, prospective customers, client interviews etc. This is analogous to the exploration stage in the lean start-up methodology. When it comes to validation, other members of the team start to get involved; the traveller gets passed on to marketing, who start to put together some collateral and run test campaigns. The goal is to achieve product-market fit, and as a by-product the traveller is now populated with solid empirical data – the inklings of working campaigns and channels, persuasion assets that resonate. The completed traveller is able to answer questions that no one person can answer on their own.
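To make the idea concrete, the traveller could start as nothing more than a shared structured record that each team appends to as the opportunity moves through the cycle. A minimal sketch in Python, where the stage names and fields are purely illustrative rather than a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class TravellerEntry:
    stage: str       # e.g. "exploration", "validation"
    author: str      # who added the entry: field sales, marketing, ...
    findings: dict   # customer pain, hot buttons, test campaign results, ...

@dataclass
class SalesTraveller:
    product: str
    entries: list = field(default_factory=list)

    def add(self, stage, author, findings):
        """Pass the traveller on: record what this team member learned."""
        self.entries.append(TravellerEntry(stage, author, findings))

# Sales populates the exploration stage, then marketing adds validation data.
traveller = SalesTraveller("new gas-sensor application")
traveller.add("exploration", "field sales",
              {"pain": "slow lab turnaround", "hot_buttons": ["speed", "portability"]})
traveller.add("validation", "marketing",
              {"winning_copy": "results in minutes", "landing_page_ctr": 0.042})
```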


Quick and Dirty Data Appending

April 25, 2011

With internet-based marketing you generally don’t want to ask for too much information on a webform. One common practice is to take the minimum amount of information on the form and use a ‘data appending’ service, e.g. Jigsaw, InsideView or LinkedIn, to build out the details for a new lead, e.g. address and company information.

A quick and dirty, but effective, technique is to send an email to your lead database on a public holiday – a high percentage of people have their Outlook ‘Out of Office’ switched on, which means you get a return email, often with a lot of useful detailed information, e.g. job title and direct phone number, that can be used to append the lead record in your CRM system.
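If you want to automate the appending step, a rough sketch of pulling candidate details out of those auto-replies might look like the following; the patterns are illustrative guesses, since real out-of-office signatures vary a lot:

```python
import re

def extract_contact_details(auto_reply):
    """Pull a rough phone number and job title line out of an out-of-office reply.

    Real signatures vary wildly, so treat anything found here as a candidate
    to review before writing it back into the CRM record.
    """
    details = {}

    # Very rough phone pattern: optional +, then digits, spaces, brackets and
    # dashes, at least seven characters long and ending in a digit.
    phone = re.search(r'\+?[\d\s()-]{7,}\d', auto_reply)
    if phone:
        details['phone'] = phone.group(0).strip()

    # Guess the job title line from common signature keywords.
    title = re.search(r'(?im)^.*\b(manager|director|engineer|scientist|vp)\b.*$',
                      auto_reply)
    if title:
        details['job_title'] = title.group(0).strip()

    return details

reply = """I am out of the office until Monday.
Jane Doe, Senior Applications Scientist
Direct: +44 1223 555 0100"""
print(extract_contact_details(reply))
```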


On average, our customers have one breast and one testicle – the problem of forecasting

July 17, 2010

The ‘Flaw of Averages’ is alive and well in sales forecasting in CRM software. I have fallen in love with salesforce.com (big time) and have been playing about with the forecasting module; great, but it doesn’t capture the binary nature of a sales opportunity. If I have an opportunity for $100k and assign a subjective probability of 60% that we will get it, what should we put in the forecast? Expected value = $100k x 60% = $60k – not really, because it either ends up $0 or $100k. Same principle as “The average human has one breast and one testicle” (Des McHale).

When these normalised expected values are used you don’t get a sense of the pipeline variance. For example, if we have 100 opportunities of $1k, each with a likelihood of 50%, or 1 opportunity of $100k with a likelihood of 50%, the expected value is the same, $50k, but I know which of these I would rather include in a forecast.

It’s Monte Carlo time: salesforce.com has a nice integration with Excel to export reports. Here is an example spreadsheet with a bunch of random values for the sales value and subjective likelihood to close.

DOWNLOAD THE MONTECARLO FORECAST SPREADSHEET

What does it do? It extracts the pipeline data from salesforce and creates a list of the opportunities, their sales value and probability to close. A macro is then used to simulate 300 ‘possible outcomes’, i.e. in one outcome we may close on the first two opportunities and not the third, in a different outcome we may only close on one opportunity, etc. From these simulated outcomes we can then pull out some stats: the expected sales figure, the min, the max and the variance, i.e. the possible spread of results.
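For anyone who would rather play with the idea outside Excel, here is a minimal sketch of the same simulation in Python; the pipeline below is made up for illustration rather than exported from salesforce:

```python
import random
import statistics

def simulate_pipeline(opportunities, n_runs=300):
    """Simulate possible pipeline outcomes.

    opportunities: list of (value, probability_to_close) tuples,
    e.g. pulled from a salesforce.com pipeline report.
    Returns one simulated total per run.
    """
    outcomes = []
    for _ in range(n_runs):
        # In each run, every opportunity either closes (we book its
        # full value) or it doesn't (we book nothing).
        total = sum(value for value, p in opportunities if random.random() < p)
        outcomes.append(total)
    return outcomes

# Illustrative pipeline: 100 opportunities of $1k, each 50% likely to close.
pipeline = [(1_000, 0.5)] * 100
outcomes = simulate_pipeline(pipeline)

print("expected:", statistics.mean(outcomes))
print("min:", min(outcomes), "max:", max(outcomes))
print("std dev:", statistics.stdev(outcomes))
```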

Lots of eggs in different baskets: if we have 100 opportunities of $1k, each with a likelihood of 50%, then it turns out that it is very likely we’ll close on at least $40k worth of business. The corollary is that it is also very unlikely that we’ll close on more than $60k of business. Like death and taxes, you have the inevitability of making a trade between risk and reward. Having a good estimate of the variance is very useful for planning, especially if you need to watch cash flow. It is also handy for keeping in check the ever-present cognitive bias of overconfidence.
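In this simple, identical-opportunity case you don’t even need the simulation: the number of closed deals is binomially distributed, so the ‘$40k to $60k’ claim can be checked exactly. A quick sketch, again assuming the illustrative 100 × $1k pipeline:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n, p = 100, 0.5                                        # 100 x $1k opportunities at 50%
print("P(close >= $40k):", 1 - binom_cdf(39, n, p))    # ~0.98, very likely
print("P(close >  $60k):", 1 - binom_cdf(60, n, p))    # ~0.02, very unlikely
```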


The graph above is a value-at-risk (VaR) chart for the case of 100 opportunities of $1k, each with a likelihood of 50%. The distribution shows that the most likely outcome is around the $50k mark, which is exactly the expected value. Most of the action happens in the $40-$60k region, so you can use these as your upper and lower estimates when rolling up into a financial model.

In the example above it is pretty easy to calculate the exact expected return and variance. The Monte Carlo approach becomes useful in the real-world case where the individual sales values and likelihoods to close are all over the map. I’ve also found it useful because the integration sucks the real data out of salesforce seamlessly. In the example below there is a larger $40k opportunity, with a likelihood of close of 70%, grouped in with the smaller $1k opportunities; what we now see is a bimodal distribution, where depending on whether we close the big deal we are sitting on the lower or upper distribution – for the electronic engineers reading, there’s a sales ‘band gap’.
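Reusing the simulate_pipeline sketch from above, the bimodal case is easy to reproduce with made-up numbers:

```python
from collections import Counter

# 100 x $1k opportunities at 50%, plus one $40k opportunity at 70%.
pipeline = [(1_000, 0.5)] * 100 + [(40_000, 0.7)]
outcomes = simulate_pipeline(pipeline)

# A crude text histogram in $5k buckets shows the two humps: runs where
# the big deal closed sit roughly $40k above the runs where it didn't.
buckets = Counter((total // 5_000) * 5_000 for total in outcomes)
for bucket in sorted(buckets):
    print(f"${bucket:>7,}: {'#' * buckets[bucket]}")
```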


Using this type of forecasting approach, integrated with your real-world CRM data, helps you figure out more realistically what the pipeline risk is. The big health warning is that you need good historical data on the likelihood to close at different parts of your sales cycle, applied consistently across the sales team, or you’ll end up with GIGO.


Pragmatic Problem Solving

December 5, 2009

There are many ways to come at a problem, ranging from thorough analysis through to the use of simple heuristics and rules of thumb. I always like it when people can get to an answer by looking at it slightly differently. I came across this one in Baeyer’s book, Information.

Samuel Morse, of Morse code fame, wanted to develop the most efficient way to code letters so they could be transmitted quickly. The principle for achieving this is pretty obvious: the most efficient code assigns short symbols to common letters and long symbols to rare ones. He then had to answer the question of what is common and what is rare. In what order of frequency do letters appear in English? One way to gather such statistics is to select a text and count the number of times each letter appears. This method works well for the three or four most common letters, but it becomes less reliable for the more uncommon ones, such as Q, X and Z, unless the reference text is very long. Besides, who wants to count letters from a 1,000-page book? Morse’s pragmatic solution was a lot quicker: he walked into a newspaper office and counted the number of letters in each compartment of the printer’s type box. Presumably decades of experience had reduced its contents to an efficient compromise between supply and demand. Since he found more Es than any other letter, E is represented by a single dot, followed by T with a dash. X, Y and Z, on the other hand, whose compartments in the type box were relatively empty, drew four symbols each.
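The ‘thorough’ approach Morse sidestepped is, of course, trivial today; a small sketch that counts letter frequencies in whatever reference text you have to hand:

```python
from collections import Counter

def letter_frequencies(text):
    """Count how often each letter appears, most common first."""
    letters = (ch for ch in text.upper() if ch.isalpha())
    return Counter(letters).most_common()

sample = "The quick brown fox jumps over the lazy dog, again and again."
for letter, count in letter_frequencies(sample):
    print(letter, count)
```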


An Algorithm for Business Success?

November 29, 2009

It seems that testing is the flavour of the month in business these days. All the presentations I go to talk about A/B split testing and multivariate Taguchi methods. Of course the guiding principle of testing is a good one, but I think it gives some people the misguided notion that business is a purely deterministic process and that persistent testing provides an algorithm for success (or quick, cheap failure, which is also good). There are some useful parallels with empiricism and its critics.

What am I actually testing? The process seems pretty simple: do A/B tests on your Google ads, your landing pages, your email blasts, your automated workflows, etc. Eke out success one word change at a time. But how do you know you are isolating the one thing you want to test? How do you know you are not just locally optimising in totally the wrong place?

The empiricists and positivists thought the only source of knowledge is experience. It is a fundamental part of the scientific method that all hypotheses and theories must be tested against observations of the natural world, rather than resting solely on a priori reasoning, intuition or revelation. Sounds reasonable. Quine illustrated problems with this view in “Two Dogmas of Empiricism”. Quine argued for a holistic theory of testing; he thought that you cannot understand a particular thing without looking at its place in a larger whole. Holism about testing says that we cannot test a single hypothesis in isolation; instead we can only test complex networks of claims and assumptions. To test one claim you need to make assumptions about many other things, e.g. measurement equipment, data quality etc. So whenever you think you are testing a single idea, what you are really testing is a long, complicated conjunction of statements. If a test has an unexpected result, then something in that conjunction is false, but the failure of the test itself does not tell you where the error is.

Take the example of ‘test the business model over a period of one year’: the background assumptions and conjunction of interdependencies are legion. Two things can happen. You can say it doesn’t work when the cause of failure is actually a simple element in the web of dependencies that could be changed easily, i.e. you get a false negative – a wrong pricing decision, for example. You can also ‘forgive’ a fundamental problem by saying that something else in the chain is the cause, i.e. a false positive. For any complex business decision the theory is always underdetermined by the available evidence, i.e. there will always be a range of possible alternative theories compatible with the set of evidence. So what good is my test if it doesn’t tell us something definitive?

‘It didn’t work this time’ is different from ‘it doesn’t work’. People are also very keen on the notion of failing fast and failing cheap. Once again admirable, but how do you know when you have failed? Karl Popper thought science progressed by a process of falsification; because of the problem of induction you could never say that a general statement was true from a handful of observations, but you could say the statement was false if an observation contradicted it. The issue of underdetermination rears its head again: you could never force someone to logically conclude that a theory was false, because it may be a background assumption that is at fault. Falsification also struggles with probabilistic statements. Take the example of proton decay – some grand unified theories predict that a proton should decay into new X bosons. During the 80s there were a lot of experiments and they never saw a proton decay. They were able to put a lower limit on the proton half-life of 6.6×10^33 years, but were not able to say that it doesn’t decay. Most people may conclude that it doesn’t decay, but the key thing is that they have to make a choice to believe so; it does not follow logically from observation. Doing a split test on a low-volume search term feels a bit like waiting for proton decay.

Now take an example like James Dyson – he made 5,126 prototypes of his vacuum cleaner before hitting the big time. Why did he not declare that he had failed quickly and cheaply after the first 10 tries? Often it is difficult to know whether you have the admirable quality of persistence or whether you are just a nutter.

Putting things to the test is a good idea, but it only really works in a very well-bounded context; most of the success stories come from web-based businesses that have a large enough user base to derive useful conclusions. For the majority of businesses there will be other things that matter a great deal more. A business has a huge number of knobs that you can turn; the only problem is that you can’t turn them all independently of each other. Basically, I don’t think people should spend a lot of their time obsessing over analytics. Doing things intuitively has served a lot of people well for a very long time. If anyone can figure out how to do an A/B split test on the ‘cut of your jib’, please let me know.

