Running a trial: How hard can it be?
by Bill Snelgar - Plant & Food Research and Shane Max - Zespri
Your trial is valuable
Research trials are about learning, so you need to be able to believe your data, even if the trial does not show what you hoped. Carrying out a ‘rough’ comparison, then discarding the results you don’t like - “the rows must have been different!” - while keeping those that fit your theory, is a waste of time. You are only confirming your prejudices - you are not learning anything new.
Orchardists and scientists generally carry out trials to see if they can improve the productivity and, therefore, the profit of an orchard. This means the outcomes can be of huge value, and any errors can be costly.
Reaching a wrong conclusion can:
• Add needless cost, by encouraging you to use a product or technique that does not work;
• Lose money by not using a product or technique that does work.
With the payment structure used in the kiwifruit industry, even small increases in fruit size or fruit dry matter can give large increases in orchard gate return (OGR) (see Photo 1). For a Hayward orchard carrying 10,000 trays of fruit, we estimate that:
• 1 percent dry matter (0.15 TZG) is worth about $3,700/ha;
• 3g fresh weight (1 count size) is worth about $2,500/ha.
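To make the arithmetic concrete, the per-hectare figures above can be turned into a quick back-of-envelope calculator. This is only a sketch: the function name is ours, and it assumes the gains scale linearly and simply add together, which the article does not claim.

```python
# Rough value of small quality gains, using the per-hectare figures quoted
# above for a Hayward orchard carrying 10,000 trays/ha. The linear scaling
# and additivity below are illustrative assumptions, not industry rules.

VALUE_PER_PCT_DRY_MATTER = 3700      # $/ha per 1 percent dry matter (~0.15 TZG)
VALUE_PER_GRAM_FRESH_WT = 2500 / 3   # $/ha per 1 g fresh weight (3 g ~ $2,500/ha)

def extra_ogr_per_ha(dm_gain_pct: float, fw_gain_g: float) -> float:
    """Estimated extra orchard gate return ($/ha), assuming gains add linearly."""
    return (dm_gain_pct * VALUE_PER_PCT_DRY_MATTER
            + fw_gain_g * VALUE_PER_GRAM_FRESH_WT)

# A treatment that adds 0.5 percent dry matter and 2 g fresh weight:
gain = extra_ogr_per_ha(0.5, 2.0)
print(f"Estimated extra OGR: ${gain:,.0f}/ha")  # → Estimated extra OGR: $3,517/ha
```

Even a modest combined gain is worth thousands of dollars per hectare - which is exactly why it matters whether a trial result is real or not.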
It is not easy to carry out trials that will always give you the correct answer when differences are this small. To avoid costly mistakes, a trial usually needs:
• A control – where no treatment is applied so that you can see exactly what your treatment has changed;
• Replication – to give some idea of the inherent variation among vines/fruit. Replication makes calculation of error possible;
• Randomisation – to avoid treatment bias and ensure that effects unknown to the experimenter are averaged out for each treatment.
Photo 1. If this spray increases the size of these (Green14) fruit by 3g, it’ll be worth about $3,000/ha, but it costs about $1200/ha to apply it. You really need to know if it works or not.
Control
In a trial, you will want to see what changes are caused by your treatment so you need the control vines to be very similar to your treated vines to start with. Control vines should not be:
• Your best row or block;
• The shelter row;
• The odd-shaped block in the corner;
• Your neighbour’s orchard;
• Last year’s crop.
Using good controls can actually help you a great deal when assessing the value of a treatment. For instance, Figure 1 shows the results of a trial where two alternative chemicals were compared with Hi-Cane®. The results look very promising, with the alternatives producing more than 11,000 trays/ha. Most orchardists would be very
happy with these new products. However, this trial also had a control where vines were not sprayed with anything. These vines yielded over 10,000 trays/ha. With this additional information, you’d probably decide that although all chemicals did increase yield, basic orchard management was the dominant factor producing the high yields in this orchard.
Replication and pseudo-replication
This is one of the hardest things to grasp - if you spray 10 vines in an orchard row, why don’t you have 10 replicates? The answer lies in the number of decisions you make about the trial layout. If you spray one row and leave another unsprayed,
you have made one decision and have one replicate of each treatment (Figure 2-A). This design is not statistically analysable and may well be misleading, since the two rows may be slightly different anyway - especially if one row is a shelter row. You are probably better off not doing this trial. In version B (Figure 2-B), at least the treatments are spread across both rows, so bias should be reduced, but there are only two replicates and any difference is unlikely to be statistically detectable. Layout C is the one we’d use in a scientific trial. From the pattern of colours, it is obvious that treatments have been allocated to each vine one by one. The unit of replication is the item to which the treatment is applied individually – so individual vines are much better than rows or part-rows here. One glance at layouts A and B tells you they have poor replication, and you should be very sceptical of any findings from such a trial.
Figure 1. Results of a spray trial showing the yield of Hayward after vines had been sprayed with Hi-Cane®, or with alternative chemicals.
Randomise
The aim is to spread the treatments around the block randomly, without giving yourself the chance to bias the layout by choosing where treatments go. It is a good idea to select the vines for your trial carefully and reject those in poor condition or in badly-performing areas of the orchard.
For instance, if you are worried that the north end of the block may crop better than the southern end, tag equal numbers of vines in each end and then randomly assign treatments within each half of the block. You can see in Figure 2-C that we have made sure there are three replicates of each treatment in the north end of the block and three in the southern end.
It is tempting simply to treat every second vine, but this can lead to bias if there are gradients in productivity along the block. Randomising is the safest way to lay out a trial, and it is easy. If you have only two treatments, flipping a coin is the quickest way. With more treatments, you may want to roll a dice.
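The stratified randomisation described above - equal replicates of each treatment in each half of the block, assigned at random within each half - can be sketched in a few lines. This is only an illustration: the vine labels, treatment names, and block halves are made up, and the shuffle is simply the code equivalent of flipping a coin for every vine.

```python
import random

# Illustrative stratified randomisation: three replicates of each treatment
# in each half of the block, assigned at random within the half so we never
# choose where a treatment goes. All labels here are invented.

treatments = ["control", "spray"]
replicates_per_half = 3

layout = {}
for half in ["north", "south"]:
    # Three replicates of each treatment in this half...
    plan = treatments * replicates_per_half
    # ...shuffled so the experimenter cannot bias the layout.
    random.shuffle(plan)
    for vine, treatment in enumerate(plan, start=1):
        layout[f"{half}-vine{vine}"] = treatment

for vine, treatment in layout.items():
    print(vine, treatment)
```

However the shuffle falls, each half of the block always ends up with exactly three vines of each treatment - the stratification guarantees balance while the randomisation removes bias.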
Pseudo-replication is the term used when someone analyses the layout in Figure 2-A and claims that they have 10 replicates. Analyses of this type are entirely unacceptable and are likely to be highly misleading. But it happens. Be very worried when someone says ‘the trial was not properly randomised, but we analysed it anyway’.
Statistics: making your analyses objective
Orchards and vines are not all identical so any time we measure attributes like fruit size and dry matter, we will usually see that some vines, or blocks, are better than others.
In our trials, we have found that the average dry matter of fruit typically varies by about 1 percent between vines. This variation can easily hide a good result, while a poor trial layout may also throw up large differences that are simply due to the between-vine variation we expect in any block. Statistics are the only way of deciding whether the difference you see is due to the treatment you applied - or merely to chance variation.
Even statistical analyses are not infallible. It is conventional to accept a difference as ‘significant’ if the probability of the difference occurring by chance is less than 1 in 20 (often referred to as P=0.05). That means that if you test 20 treatment comparisons, you are likely to obtain one ‘significant’ difference just by chance.
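One simple way to put a probability on an observed difference, without statistical tables, is a permutation test: shuffle the treatment labels many times and count how often chance alone produces a difference as large as the one you saw. The per-vine dry matter figures below are invented for illustration, and this sketch is no substitute for proper analysis - but it shows what a P value actually measures.

```python
import random
from statistics import mean

# Hypothetical per-vine dry matter (%) for 6 control and 6 treated vines.
control = [16.1, 15.4, 16.8, 15.9, 16.3, 15.7]
treated = [16.9, 16.4, 17.2, 16.0, 17.0, 16.6]

observed = mean(treated) - mean(control)  # 0.65 % dry matter

# Permutation test: if the treatment did nothing, the labels are arbitrary,
# so reshuffle them and see how often chance gives a difference this large.
random.seed(1)
pooled = control + treated
n_extreme = 0
n_shuffles = 10_000
for _ in range(n_shuffles):
    random.shuffle(pooled)
    diff = mean(pooled[6:]) - mean(pooled[:6])
    if diff >= observed:
        n_extreme += 1

p = n_extreme / n_shuffles
print(f"Observed difference: {observed:.2f}% dry matter, P = {p:.3f}")
```

A small P here means few random relabellings matched the observed gap, so the difference is unlikely to be chance alone; a conventional t-test should give a broadly similar answer on data like these.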
If run correctly, simple ‘on orchard’ trials can increase our knowledge rapidly and cheaply. There are a number of resources to help growers and technical staff undertake and analyse trials, including a series of KiwiTech Bulletins developed by Plant & Food Research on this topic. If you are unfamiliar with setting up trials, you are encouraged to discuss your idea with one of the team from Zespri’s Orchard Productivity Centre or with a friendly scientist. This should cover not only how to set the trial up but also how to measure and analyse the effects.