# Set custom ROI priors using past experiments
Meridian requires distributions to be passed in for ROI calibration. Although setting custom priors using results from past experiments is a sound approach, there are many nuances to consider before proceeding. For example:
- **The timing of the experiment relative to the MMM time window:** If the experiment was conducted either before or after the marketing mix model (MMM) time window, its results might not be directly applicable.
- **The duration of the experiment:** A short experiment might not capture the long-term effects of the marketing.
- **The complexity of the experiment:** If the experiment involved a mixture of channels, its results might not provide clear insights into the performance of individual channels.
- **Estimand differences:** The estimand used in an experiment can differ from the one used in the MMM. For example, the MMM counterfactual is zero spend, whereas some experiments use a different counterfactual, such as reduced spend.
- **Population differences:** The population targeted in the experiment might not be the same as the population considered in the MMM.
We recommend setting custom priors based on how strongly you believe in the effectiveness of a channel. A prior belief can be informed by many things, including experiments or other reliable analyses. Use the strength of that prior belief to set the standard deviation of the prior:
- If you have a strong belief in the effectiveness of a channel, you can apply an adjustment factor to the standard deviation of the prior to reflect your confidence. For example, suppose you have run several experiments for a particular channel and all of them yielded similar ROI point estimates, or you have historical data from previous MMM analyses that support the channel's effectiveness. In this case, you could set a smaller standard deviation for the prior so that the distribution doesn't vary widely. This tighter distribution signals strong confidence in the experimental results.
- Conversely, an experiment might not translate directly to the MMM, given some of the nuances listed earlier. In this case, you might apply an adjustment factor to the standard deviation of the prior distribution in the opposite direction, setting a larger standard deviation that reflects your level of skepticism, as sketched after this list.
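The adjustment factor itself is a judgment call rather than a Meridian API concept. As a minimal sketch (the experiment numbers and factor values below are hypothetical), you could scale the experiment's standard error before converting it into prior parameters:

    # Hypothetical experiment readout: ROI point estimate and standard error.
    experiment_mean = 2.1
    experiment_std = 0.4

    # Subjective adjustment factors: shrink the standard deviation when you
    # trust the experiment, inflate it when you doubt it transfers to the MMM.
    confident_std = experiment_std * 0.5  # tighter prior, strong belief
    skeptical_std = experiment_std * 2.0  # wider prior, more skepticism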
You should consider using the `roi_calibration_period` argument in `ModelSpec`. For more information, see [Set the ROI calibration period](/meridian/docs/user-guide/configure-model#set-roi-calibration-period).
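A minimal sketch of that configuration follows. It assumes `roi_calibration_period` accepts a boolean mask over media time periods and channels; the import path, array shape, and one-year window are illustrative, so check the linked guide for the exact expected format:

    import numpy as np
    from meridian.model import spec

    # Illustrative dimensions: 104 weekly periods, 3 media channels.
    n_media_times, n_media_channels = 104, 3

    # Assumed format: True marks the periods used for ROI calibration.
    roi_period = np.zeros((n_media_times, n_media_channels), dtype=bool)
    roi_period[-52:, :] = True  # e.g., calibrate on the most recent year

    model_spec = spec.ModelSpec(roi_calibration_period=roi_period)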
When setting the prior, the `LogNormal` distribution is a common choice. The following sample code transforms an experiment's mean and standard error into the parameters of a `LogNormal` prior distribution:
    import numpy as np

    def estimate_lognormal_dist(mean, std):
      """Reparameterization of lognormal distribution in terms of its mean and std."""
      mu_log = np.log(mean) - 0.5 * np.log((std/mean)**2 + 1)
      std_log = np.sqrt(np.log((std/mean)**2 + 1))
      return [mu_log, std_log]
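For example, a hypothetical experiment with an ROI point estimate of 2.1 and a standard error of 0.4 maps to log-scale parameters like this:

    # Hypothetical experiment results; replace with your own estimates.
    mean, std = 2.1, 0.4
    mu_log, std_log = estimate_lognormal_dist(mean, std)
    print(mu_log, std_log)  # LogNormal parameters on the log scale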
However, if the results from previous experiments are near zero, you should consider whether your prior beliefs are accurately represented by a non-negative distribution such as the `LogNormal` distribution. We highly recommend plotting the prior distribution to confirm that it matches your prior intuitions before proceeding with the analysis. The following sample code shows how to get the reparameterized `LogNormal` parameters, define the distribution, and draw samples from it.
    import tensorflow as tf
    import tensorflow_probability as tfp

    # Get reparameterized LogNormal distribution parameters
    mu_log, std_log = estimate_lognormal_dist(mean, std)
    mu_log = tf.convert_to_tensor(mu_log, dtype=tf.float32)
    std_log = tf.convert_to_tensor(std_log, dtype=tf.float32)

    # Define the LogNormal distribution
    lognormal_dist = tfp.distributions.LogNormal(mu_log, std_log)

    # Draw 10,000 samples
    lognormal_samples = lognormal_dist.sample(10000).numpy()
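One way to follow the plotting recommendation above is a quick histogram of the drawn samples. matplotlib is an assumption here, not a Meridian requirement:

    import matplotlib.pyplot as plt

    # Visual sanity check: does the implied ROI prior match your intuition?
    plt.hist(lognormal_samples, bins=100, density=True)
    plt.xlabel("ROI")
    plt.ylabel("Density")
    plt.title("Implied LogNormal ROI prior")
    plt.show()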