
Why managers with low forecast precision select high disclosure intensity: an equilibrium analysis


Abstract

Shin (J Account Res 44(2):351–379, 2006) has argued that, in order to understand the equilibrium patterns of corporate disclosure, researchers need to work within an asset-pricing framework in which corporate disclosures are endogenously determined. Echoing this sentiment, Larcker and Rusticus (J Account Econ 49(3):186–205, 2010) have argued that earlier empirical results claiming to find a negative relationship between disclosure and cost of capital may suffer fatally from endogeneity issues which, once addressed by a formal structural model, may reverse the sign of the relationship. The purpose of this paper is to introduce a general equilibrium model, following the Black–Scholes paradigm, with endogenous disclosure, in which firms select uniquely determined optimal probabilities of early equity-value discovery in a noisy environment. As firms may also differ in the uncertainty (precision) with which management can forecast the future, managers strategically increase the intensity of their (voluntary) disclosures to provide partial compensation for this perceived differential risk. A positive relationship between disclosure and the cost of capital then results.


Notes

  1. We find that passing back and forth, via logarithms, between the additive arithmetic averaging of classical linear regression in respect of normal returns and its log-normal counterpart—a multiplicative geometric averaging of asset values—is straightforward and intuitive. The non-linearity of the logarithm turns out to be highly tractable.

  2. Note that the equilibrium cutoff is below the opening expected value.

  3. We shall discuss the possible relaxation of this assumption in Sect. 5.3.

  4. See for example McNeil et al. (2005), Section 2.2.4.

  5. \(H_{X}(t)\) is strictly convex, positive and asymptotic to \(t-m_{X}\) (by l’Hôpital’s Rule); clearly \(H_{X}(\underline{X})=0\) and \(H_{X}^{\prime}(\underline{X})=F_{X}(\underline{X})=0,\) where \(\underline{X}\) is the lower boundary of the support of \(F_{X}\) (possibly \(-\infty,\) as for the normal, if that is admissible).

    $$ \lim_{t\rightarrow +\infty}\frac{\int_{x\leq t}(t-x)dF(x)}{t-m_{X}} =\lim_{t\rightarrow +\infty}\frac{\int_{x\leq t}dF(x)}{1}=1. $$
  6. See Ostaszewski and Gietzmann (2008). Proposition. Let H(t) be any twice differentiable, strictly convex function on \([\underline{X},\overline{X}]\) satisfying \(H(\underline{X})=0,\,H^{\prime}(\underline{X})=0\) and \(H^{\prime}(\overline{X})=1.\) Then H(t) is the hemi-mean function of a continuous distribution with mean given by

    $$ m=\overline{X}-H(\overline{X}). $$

    This needs reinterpretation when either limit of the support interval is infinite. For instance, when \(\overline{X}=+\infty,\)

    $$ m=\lim_{t\rightarrow \infty}(t-H(t)). $$
  7. \(\int_{u\geq \gamma}(u-m_{X})dF(u)=\int_{u\leq \gamma}(m_{X}-u)dF(u)=(m_{X}-\gamma)F(\gamma)+\int_{u\leq \gamma}F(u)du\)

  8. Level of equity, in our case.

  9. \({{\mathbb{E}}[\alpha X|\alpha X<\alpha \gamma ]=\alpha {\mathbb{E}}[X|X<\gamma ]}\) for α > 0; or note that

    $$ H_{\alpha X}(\alpha \gamma)=\int_{\alpha x\leq \alpha \gamma}\hbox{Pr } (\alpha X\leq \alpha x)d(\alpha x)=\alpha \int_{x\leq \gamma}\hbox{Pr } (X\leq x)dx=\alpha H_{X}(\gamma). $$
  10. Another way of extending the original framework is to interpret X parsimoniously as incorporating real effects: e.g. the observation of an input price in a managed production process whose certain (short time-scale) implementation leads to a sure realizable (indirect) profit.

  11. This observation provides the basis for relating in Sect. 2.4 the market price to investor beliefs—see Eq. (17), and the preceding footnote 21; such analysis leads to formula (21) for the cost of capital.

  12. Our approach starts with the risk-neutral (pricing) distribution of equity value; for an alternative approach to equilibrium pricing in which investor demand arises from mean-variance utility see Suijs (2011).

  13. For arbitrary distributions F, Fishburn studies preferences over F representable by a utility U(μ(F), ρ(F)) over two parameters associated with F: the mean μ(F) and a risk-measure ρ(F) of the general form \(\int_{x\leq t}\varphi (t-x)dF(x)\) for \(\varphi \) non-negative and non-decreasing with \(\varphi (0)=0,\) thus including our case where \(\varphi (t-x)=t-x.\) Such risk-measures capture notions of ‘riskiness’ for outcomes x below the target t.

    As both parameters of F are expectations under F, Fishburn’s preference is a ‘utility of expectations’ rather than an expected utility in the von Neumann-Morgenstern sense. By constructing in his context a further utility, whose expectation captures his own utility, Fishburn shows his approach to be equivalent to that of von Neumann-Morgenstern.

  14. So the optimization should be read as maximizing a utility U(μ(F, q), ρ(F, q)) over the parameters μ(F, q) = \(m_{X}-\gamma (q)\), i.e. an adjusted mean, and ρ(F, q) = \(H_{X}(\gamma (q))\), a lower partial moment, as in the Fishburn analysis of the preceding footnote. It is thus capable of being interpreted as an expected utility rather than as a utility of the two quantities μ, ρ (which are expectations under the model (F, q)).

  15. The equilibrium condition places constraints on the size of λ, since 0 < F(γ) ≤ F(m) < 1. Note that λ < 1 iff q > 1/2.

  16. The marginal rate of substitution of a utility function is homogeneous of degree zero iff the utility function is homothetic or itself homogeneous of degree zero—see Forsund (1975).

  17. The work by Penno is a timely reminder of the care that needs to be taken when trying to extend theory to a model of real-world practice. However, the following section shows that Penno’s specific result on non-consistency (lack of monotonicity) arises because of the somewhat restrictive functional form he used. We generalize his approach and show which class of probability functions admits consistency.

  18. See Yermack (1997), Aboody and Kasznik (2000), and Gao and Shrieves (2002) for a discussion of strategic granting of options.

  19. In such a sequential market model (allowing trades at dates in between the interim and terminal dates), the manager’s trading, having become observable, is subject to inferential analysis. The revised managerial opportunity set necessitates that the optimal managerial behaviour (given the manager’s incentive) employs a mixed strategy of buying and selling, so as to preserve optimally the manager’s informational advantage. On game-theoretic grounds, one expects that the revised valuation of the manager’s ‘option to trade’ is a convex function of q, say of the form V(q)z(γ(q)), with V(q) taking zero value at the endpoints q = 0 and q = 1. (In this respect it is similar to the case with \((1-q)(m_{X}-\gamma (q))\).) See De Meyer and Moussa Saley (2002) for a sequential auction model yielding just such a result. Our theory applies also to such general valuations, subject to permitting trades even for the (then non-verifiable) observations below γ(q).

  20. \(1-F(\gamma)=1-\lambda^{2}=(1-\lambda)(1+\lambda)\) iff \(\tau =q(1-F(\gamma))=1-\lambda,\) as \(q=1/(1+\lambda).\)

  21. The value γ is equivalently defined by a similar equation, with \(\tau_{D}=\tau\):

    $$ 1=\tau_{D}{{\mathbb{E}}}^{{\mathbb{Q}}}[X|D(\gamma)]+\tau_{ND}\cdot \gamma. $$
  22. Mathematically speaking, this constant average rate replaces the state-varying ratio of the two probability densities (i.e. replaces the Radon-Nikodym derivative \({\hbox{d}{\mathbb{P}}/\hbox{d}\mathbb{Q}}\)).

  23. In a different two-period model setting (with mandatory disclosure) Gao (2010) derives a similar result, ascribing unexplained variance to risk-bearing in the later period—see Sect. 5.3.

  24. In their notation this result would read \(\gamma (q_{1},\sigma)<\gamma (q_{2},\sigma)\) provided \(0<q_{2}<q_{1}\), since λ is decreasing in q.

  25. As before, in their notation, one has \(\gamma (q,\sigma_{2})\leq \gamma (q,\sigma_{1})\) provided \(0<\sigma_{1}<\sigma_{2}\).

  26. Formal proof of the Jacobian condition available from the authors.

  27. Details are available from the authors.

  28. See also Nishina et al. (2012).

  29. This links to Leuz and Schrand’s finding that firms increased their disclosures in an attempt to reduce the cost of capital post-Enron.

  30. This measures the variable by identifying the frequency of a subset of firm news-releases on Factiva.

  31. To be more precise, the disclosure intensity in part determines whether the state regime dummy variable \(D_{i,t}\) is above or below a threshold, and hence whether the firm is in the high-volatility regime.

  32. See also Huang et al. (2004).

  33. This general argument includes the Penno (1997) analysis for the cutoff level of a normally distributed noisy signal as a special case.

References

  • Aboody D, Kasznik R (2000) CEO stock option awards and the timing of corporate voluntary disclosures. J Account Econ 29:73–100

  • An MY (1998) Log-concavity versus log-convexity: a complete characterization. J Econ Theory 80:350–369

  • Bawa VS (1975) Optimal rules for ordering uncertain prospects. J Financ Econ 2(1):95–121

  • Bernardo AE, Ledoit O (2000) Gain, loss and asset pricing. J Polit Econ 108(1):144–172

  • Bergstrom TC, Bagnoli M (2005) Log-concave probability and its applications. Econ Theor 26:445–469

  • Bingham NH, Goldie CM, Teugels JL (1987) Regular variation, 2nd edn. Encycl Math Appl 27, Cambridge University Press, Cambridge (1st edition 1987)

  • Bingham NH, Fry JM (2010) Regression: linear models in statistics. SUMS, Springer, Berlin

  • Caplin A, Nalebuff B (1991) Aggregation and social choice: a mean voter theorem. Econometrica 59(1):1–24

  • Cascon A, Keating C, Shadwick W (2002/2003) The omega function (working paper)

  • Clinch G, Verrecchia RE (2011) Endogenous disclosure choice (working paper)

  • Cox DR, Solomon PJ (2003) Components of variance. Chapman & Hall, London

  • Cousin J-G, de Launois T (2006) News intensity and conditional volatility on the French stock market. Finance (Presses Universitaires de France) 27(1):7–60

  • Dana R-A, Jeanblanc M (2003) Financial markets in continuous time. Springer, Berlin

  • Danilova A (2010) Stock market insider trading in continuous time with imperfect dynamic information. Stochastics 82(1):11–131

  • De Meyer B, Moussa Saley H (2002) On the strategic origin of Brownian motion in finance. Int J Game Theory 31:285–319

  • Dye RA (1985) Disclosure of nonproprietary information. J Account Res 23:123–145

  • Forsund FR (1975) The homothetic production function. Swed J Econ 77:234–244

  • Fishburn PC (1977) Mean-risk analysis with risk associated with below-target returns. Am Econ Rev 67:116–126

  • Gao P (2010) Disclosure quality, cost of capital, and investor welfare. Account Rev 85(1):1–29

  • Gao P, Shrieves RE (2002) Earnings management and executive compensation: a case of overdose of option and underdose of salary? (SSRN working paper)

  • Gietzmann M, Ostaszewski AJ (2013) Multi-firm voluntary disclosures for correlated operations. Ann Financ. doi:10.1007/s10436-012-0222-1

  • Gietzmann MB, Trombetta M (2003) Disclosure interactions: accounting policy choice and voluntary disclosure effects on the cost of raising outside capital. Account Bus Res 33(3):187–205

  • Grossman S, Hart O (1980) Disclosure laws and take-over bids. J Financ 35:323–334

  • Huang W-C, Zhu Y (2004) Are shocks asymmetric to volatility of Chinese stock markets? Rev Pac Basin Financ Mark Pol 7(3):379–395

  • Hull JC (2011) Options, futures and other derivatives. Prentice-Hall, Upper Saddle River, NJ

  • Jung W, Kwon Y (1988) Disclosures when the market is unsure of information endowment of managers. J Account Res 26:146–153

  • Keating C, Shadwick W (2002) A universal performance measure. J Perform Meas 59–84

  • Kyle A (1985) Continuous auctions and insider trading. Econometrica 53:1315–1335

  • Larcker DF, Rusticus TO (2010) On the use of instrumental variables in accounting research. J Account Econ 49(3):186–205

  • Leuz C, Schrand C (2009) Disclosure and the cost of capital: evidence from firms’ responses to the Enron shock (NBER working paper no. 14897)

  • Levy H (1992) Stochastic dominance and expected utility: survey and analysis. Manage Sci 38(4):555–593

  • McNeil AJ, Frey R, Embrechts P (2005) Quantitative risk management. Princeton University Press, Princeton

  • Nawrocki D (1999) A brief history of downside risk. J Invest 8(3):9–25

  • Nishina K, Maghrebi N, Holmes MJ (2012) Nonlinear adjustments of volatility expectations to forecast errors: evidence from Markov-regime switches in implied volatility. Rev Pac Basin Financ Mark Pol 15(3):1–23

  • Ostaszewski A, Gietzmann M (2008) Value creation with Dye’s disclosure option: optimal risk-shielding with an upper tailed disclosure strategy. Rev Quant Financ Acc 31(1):1–27

  • Pedersen CS, Satchell SE (2002) On the foundations of performance measures under asymmetric returns. Quant Finance 2(3):217–223

  • Penno MC (1997) Information quality and voluntary disclosure. Account Rev 72:275–284

  • Rogers JL, Schrand CM, Verrecchia RE (2008) The impact of strategic disclosure on returns and return volatility: evidence from the ‘leverage effect’. Wharton School (working paper)

  • Searle SR, Casella G, McCulloch CE (1992) Variance components. Wiley, New York

  • Shin HS (2006) Disclosure risk and price drift. J Account Res 44(2):351–379

  • Suijs J (2011) Asymmetric news and asset pricing (working paper)

  • Van Buskirk A (2012) Disclosure frequency and information asymmetry. Rev Quant Financ Account 38(4):411–440

  • Varian HR (1992) Microeconomic analysis, 3rd edn. WW Norton, New York

  • Verrecchia RE (1990) Information quality and discretionary disclosure. J Account Econ 12(4):365–380

  • Yermack D (1997) Good timing: CEO stock option awards and company news announcements. J Financ 52(2):449–476

  • Zhang L (2011) The impact of information quality uncertainty on forward looking disclosure (SSRN working paper)


Author information

Corresponding author

Correspondence to Adam J. Ostaszewski.

Appendices

Appendix 1: Isotonic transformations

We are concerned here with a general noisy observation T of the signal X, in the form T = T(X, Y), where X and Y are independent random variables. Our interest focuses on the regression function \({\mu_{X}(t)={\mathbb{E}}[X|T=t]}\) and on the random variable, which we call the estimator of X,

$$ X^{\rm est}={{\mathbb{E}}}[X|T]=\mu_{X}(T), $$

which it is more convenient here to abbreviate to S.

We now re-derive the Dye equation basing the analysis on the estimated value of X (given T) in place of X itself.

In this new context the Dye equation holds for the estimator \(X^{\rm est}\) of the true value (see footnote 33), as follows:

Let L(x) be the inverse function of \(\mu_{X}(t)\). The entire analysis rests on the following simple observation:

$$ \hbox{Pr } [X^{\rm est}\leq x]=\hbox{Pr } [\mu_{X}(T)\leq x]=\hbox{Pr } [T\leq L(x)]=F_{T}(L(x)), $$

where \(F_{T}\) is the probability distribution function of T. The argument identifies \(F_{S}(x):=F_{T}(L(x))\) as the probability distribution function of \(X^{\rm est}\).
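
As a concrete illustration (our own specification, not one required by the theorem below): take the multiplicative structure T = XY with X and Y independent lognormal variables, say \(\log X\sim N(a_{1},s_{1}^{2})\) and \(\log Y\sim N(a_{2},s_{2}^{2})\). Writing \(\beta =s_{1}^{2}/(s_{1}^{2}+s_{2}^{2})\), normal conditioning of \(\log X\) on \(\log T\) gives

$$ \mu_{X}(t)={{\mathbb{E}}}[X|T=t]=\exp \left( a_{1}+\beta (\log t-a_{1}-a_{2})+\tfrac{1}{2}s_{1}^{2}(1-\beta)\right) =C\,t^{\beta }, $$

with C a positive constant, so that \(\mu_{X}(t)\) is strictly increasing and \(L(x)=(x/C)^{1/\beta }\).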

Theorem 8

(Isotonic Reduction Theorem) Let \(\mu_{X}(t)\) be the conditional expected value of a random variable X given an observation t of T(X, Y). Suppose that \(\mu_{X}(t)\) is strictly increasing in t. Then the noisy-signal cutoff \(\hat{t}\) (the cutoff for the disclosure of a noisy signal) reduces to the clean cutoff for the estimator distribution \(F_{S}\) with \({S:={\mathbb{E}}[X|T],}\) where \(\hat{s}=\mu_{X}(\hat{t})\) solves the Dye equation

$$ \frac{1-q}{q}(m_{X}-s)=H_{S}(s), $$
(32)

and where \({{\mathbb{E}}[S]={\mathbb{E}}[X]=m_{X}}\) is also the mean of the estimator distribution \(F_{S}\).

Comments

  1. The Theorem assumes a minimum specification regarding how the random variable T is related to X, namely that \(\mu_{X}(t)\) is increasing in t. It may thus be applied both to an additive noise structure X + Y, like that of Penno (1997), and to our preferred multiplicative structure XY.

  2. The proof of Theorem 8 reduces to implementing a simple substitution which enables the non-disclosure region to be described by a cutoff \(\gamma_{T}\) on the observed noisy signal T and the valuation arising at the indifference point, \(\mu_{T}(\gamma_{T})\). One refers to two facts. Firstly, that the mean of the \(F_{S}\) distribution is \(m_{X}\); this is immediate from the ‘conditional mean formula’ that

    $$ {{\mathbb{E}}}[S]={{\mathbb{E}}}[{{\mathbb{E}}}[X|T ]] ={{\mathbb{E}}}[X]=m_{X}, $$

    and secondly:

    $$ \hbox{Pr } [X^{\rm est}\leq x]=\hbox{Pr } [\mu_{X}(T)\leq x]=\hbox{Pr } [T\leq \mu_{X}^{-1}(x)]=F_{T}(\mu_{X}^{-1}(x)), $$

    where \(F_{T}\) is the probability distribution function of T. (Note that \(F_{S}(x):=F_{T}(\mu_{X}^{-1}(x))\) is the probability distribution function of \(S=X^{\rm est}\).)

We will refer to (32) as the Dye equation adjusted for noise. We regard \(F_{S}\) and its hemi-mean as a replacement for \(F_{X}\) resulting from noise in the observed signal.
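
To make (32) concrete, here is a minimal numerical sketch (ours, not the paper’s code; the distributional choice, parameter values and function names are illustrative assumptions). It takes a lognormal estimator distribution \(F_{S}\), computes the hemi-mean by quadrature and locates the cutoff \(\hat{s}\) by root-finding; the bracket (0, m) works because the left side of (32) is positive near the bottom of the support, while at s = m it vanishes and the right side is positive.

```python
# Minimal numerical sketch (ours, not the paper's code): solve the
# noise-adjusted Dye equation (32), ((1 - q)/q) * (m - s) = H_S(s),
# assuming purely for illustration that the estimator S = E[X|T] is lognormal.
import numpy as np
from scipy import stats
from scipy.integrate import quad
from scipy.optimize import brentq

q = 0.6                                   # assumed endowment probability
F_S = stats.lognorm(s=0.4, scale=1.0)     # assumed estimator distribution F_S
m = F_S.mean()                            # m_S = m_X by the conditional-mean formula

def hemi_mean(s):
    """H_S(s) = integral of F_S(x) dx over x <= s (lower partial moment form)."""
    return quad(F_S.cdf, 0.0, s)[0]

lam = (1.0 - q) / q                       # the odds lambda = (1 - q)/q

def dye_gap(s):
    # lambda*(m - s) - H_S(s): positive near 0, negative at s = m
    return lam * (m - s) - hemi_mean(s)

s_hat = brentq(dye_gap, 1e-9, m)          # clean cutoff for the estimator distribution
print(f"m = {m:.4f}, cutoff s_hat = {s_hat:.4f}")
```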

Appendix 2: Optimized odds

In this appendix we consider a general concave utility function U(y, z) describing managerial preference over the risk-shielding loss, assessed as in (8) by z = m − γ when setting the cutoff at γ, and upward potential, assessed by y = H(γ). From it we derive equations identifying the unique pair \((\hat{y},\hat{z})\) which maximizes the utility function over the managerial opportunity set. The context includes, as one interpretation, the case \(m=m_{X}\) and \(H=H_{X}\), where the random variable X models the terminal value of the firm, and, as another, the case \(m=m_{S}\) and \(H=H_{S}\), where S is the estimator \( {\mathbb{E}}[X|T], \) i.e. a random variable which is some well-defined transform of a noisy observation of X (for which see Sect. 2.1).

The manager’s opportunity set is defined by:

$$ \Upomega :=\{(y,z):0\leq z\leq m,\qquad y=H(m-z)\}. $$

The set \(\Upomega \) is a strictly convex curve, since \(H^{\prime \prime}(\gamma)=f(\gamma)>0.\) Let the unique point of \(\Upomega \) which corresponds to utility optimization under U be \((\hat{y},\hat{z}).\) Subject to differentiability assumptions on U, the optimal utility contour \(U(y,z)=U(\hat{y},\hat{z})\) is tangential to \(\Upomega \) at \((\hat{y},\hat{z}).\) We analyze the significance of this observation.

In (y, z) co-ordinates the Dye cutoff condition (4) for expected indifference between non-disclosure and disclosure is:

$$ (1-q)z=qy. $$

Corresponding to the optimal pair \((\hat{y},\hat{z})\) from the preceding equation we define \(\hat{\gamma}\) and \(\hat{q}\) (or \(\hat{\lambda}\)) by

$$ \hat{\lambda}=\frac{1-\hat{q}}{\hat{q}}:=\frac{\hat{y}}{\hat{z}}, \hbox{ and } \hat{\gamma}:=m-\hat{z}. $$

Thus we find that the common tangency of the optimal utility contour and the opportunity set (illustrated in Fig. 1) implies a corresponding choice \(\hat{\lambda}\) of optimal odds.

Fig. 1 The arbitrage line (blue), the opportunity curve (red), and the tangential utility contour (green) (Color figure online)

Fig. 2 The blue and green curves exhibit the strict dominance. The red curve represents \((m-t)^{2}\) with m = 1 (Color figure online)

Our aim is to characterize the optimal odds. We derive a condition, which we shall describe henceforth as the optimal odds equation, as follows. Note that the tangent slope at \((\hat{y},\hat{z})\) along the optimal utility curve is given by the marginal rate of substitution \(-U_{z}/U_{y}\). Put

$$ {{\mathcal{M}}}_{U}(y,z):=\frac{U_{z}(y,z)}{U_{y}(y,z)}. $$

But, since y = H(m − z), the slope of the opportunity curve is given by dy/dz, i.e. by \(y^{\prime}=-F(m-z).\) Common tangency thus requires that

$$ \left. \frac{d}{dz}H(m-z)\right\vert_{z=\hat{z}}= -\frac{U_{z}(\hat{y},\hat{z})}{U_{y}(\hat{y},\hat{z})}. $$

Since \(F(\hat{\gamma})=F(m-\hat{z}),\) we obtain \({F(\hat{\gamma})={\mathcal{M}}_{U}(\hat{\lambda}(m-\hat{\gamma}), m-\hat{\gamma}).}\) In summary, the model is fully endogenized by the choice of q via the pair of equations

$$ \begin{aligned} \hat{\lambda}(m-\hat{\gamma}) &=H(\hat{\gamma}),\\ {\mathcal{M}}_{U}(\hat{\lambda}(m-\hat{\gamma}),m-\hat{\gamma}) &=F(\hat{\gamma}). \end{aligned} $$

We may now specialize our discussion to utilities U(y, z) for which the marginal rate of substitution is homogeneous of degree 0 in (y, z) and so is a function of the ratio y/z, i.e. where \({{\mathcal{M}}_{U}(y,z)=\mathcal{M}_{U}(y/z,1).}\) The class of such utilities is natural to use and quite wide, as it comprises the homothetic utilities and those that are themselves homogeneous of degree zero; see Forsund (1975). Put

$$ u(\lambda):={{\mathcal{M}}}_{U}(\lambda, 1). $$
(33)

If the U-contours are strictly convex functions of the form y = y(z), then u(λ) is increasing in λ. Similarly, if the U-contours are strictly concave functions, then u(λ) is decreasing in λ. Specifically, in the Cobb-Douglas utility case \(U(y,z)=y^{\alpha }z^{\beta }\) one has u(λ) = κλ with κ = β/α. In the Constant Elasticity of Substitution case \(U(y,z)=(\alpha y^{-\delta }+\beta z^{-\delta })^{-1/\delta }\), one has \(u(\lambda)=\left(\kappa \lambda \right)^{1+\delta}\) with \(\kappa =(\beta /\alpha)^{1/(1+\delta)}\) (so that δ = 0 reduces to the Cobb-Douglas case).
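
For instance, the Cobb-Douglas value of u(λ) quoted above follows from a one-line computation of the marginal rate of substitution (a short verification):

$$ {{\mathcal{M}}}_{U}(y,z)=\frac{U_{z}(y,z)}{U_{y}(y,z)}=\frac{\beta y^{\alpha }z^{\beta -1}}{\alpha y^{\alpha -1}z^{\beta }}=\frac{\beta }{\alpha }\cdot \frac{y}{z},\qquad \hbox{ so that }\qquad u(\lambda)={{\mathcal{M}}}_{U}(\lambda ,1)=\kappa \lambda . $$

The CES value is obtained in the same way.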

If u is strictly increasing/decreasing (as in the illustrative examples), then the fully endogenized model permits a separation of the variables and leads to the simplified equations:

$$ \begin{aligned} \hat{\lambda} &=H(\hat{\gamma})/(m-\hat{\gamma}),\\ \hat{\lambda} &=u^{-1}(F(\hat{\gamma})), \end{aligned} $$

since the common tangency condition now reads \(u(\hat{\lambda})=F(\hat{\gamma}).\)
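
For a numerical sketch of this pair (ours; the lognormal F, the Cobb-Douglas choice u(λ) = κλ and all names are illustrative assumptions), one can locate \(\hat{\gamma}\) as the zero of \(H(\gamma)/(m-\gamma)-u^{-1}(F(\gamma))\) and then read off \(\hat{\lambda}\) and \(\hat{q}\):

```python
# Minimal numerical sketch (ours): solve the separated optimal-odds equations
# for a Cobb-Douglas manager, u(lambda) = kappa*lambda, assuming for
# illustration that X is lognormal.  gamma_hat solves
#     H(g)/(m - g) = u^{-1}(F(g)) = F(g)/kappa.
import numpy as np
from scipy import stats
from scipy.integrate import quad
from scipy.optimize import brentq

kappa = 1.0                               # assumed Cobb-Douglas ratio beta/alpha
X = stats.lognorm(s=0.4, scale=1.0)       # assumed value distribution F
m = X.mean()

def H(g):
    """Hemi-mean H(g) = integral of F(x) dx over x <= g."""
    return quad(X.cdf, 0.0, g)[0]

def gap(g):
    # H(g)/(m - g) - F(g)/kappa changes sign at the optimal cutoff
    return H(g) / (m - g) - X.cdf(g) / kappa

# bracket the root on a coarse grid, then refine with brentq
grid = np.linspace(0.3 * m, 0.999 * m, 200)
vals = np.array([gap(g) for g in grid])
i = int(np.argmax(vals > 0))              # first grid point where the gap turns positive
gamma_hat = brentq(gap, grid[i - 1], grid[i])
lam_hat = H(gamma_hat) / (m - gamma_hat)
q_hat = 1.0 / (1.0 + lam_hat)
print(f"gamma_hat = {gamma_hat:.4f}, lambda_hat = {lam_hat:.4f}, q_hat = {q_hat:.4f}")
```

Replacing the last term in the gap by the inverse of any other strictly monotone u, e.g. the CES form above, changes only that one line.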

Appendix 3: The structurally minimal model

We show that the implied utility of Sect. 2.3 for the structurally minimal model is \(U(y,z)=(y^{-1}+z^{-1})^{-1}\).

To prove this, suppose as in Appendix 2 that the manager chooses among the points in the opportunity set defined by:

$$ \{(y,z):0\leq z\leq m,\qquad y=H(m-z)\}, $$

employing a general differentiable utility function U(y, z). In reduced form the Dye cutoff condition (4) for expected indifference between non-disclosure and disclosure is, as before:

$$ (1-q)z=qy\quad \hbox{ or }\quad (1-q)(y+z)=y, $$
(34)

which yields

$$ q=q(y,z)=\frac{z}{y+z}. $$
(35)

Note that q(y, z) is homogeneous of degree zero. So the manager’s maximization objective is now \((1-q)\cdot z,\) hence, eliminating q via (34),

$$ U(y,z)=(1-q)\cdot z=\frac{yz}{y+z}=(y^{-1}+z^{-1})^{-1}. $$

Compare Caplin and Nalebuff (1991), who consider generalized averages such as this harmonic one. Note also that the contour U = c is a pair of rectangular hyperbolae with centre of symmetry at y = z = c, and is expressible as

$$ (y-c)(z-c)=c^{2}. $$
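
It is worth recording (a short check of ours, connecting this appendix with Appendix 2 and with the Remark of Appendix 5) that the marginal rate of substitution of this implied utility is

$$ {{\mathcal{M}}}_{U}(y,z)=\frac{U_{z}(y,z)}{U_{y}(y,z)}=\frac{y^{2}/(y+z)^{2}}{z^{2}/(y+z)^{2}}=\left(\frac{y}{z}\right)^{2},\qquad \hbox{ i.e. }\qquad u(\lambda)=\lambda^{2}, $$

so the structurally minimal model is homothetic and falls under the case u(λ) = λ² treated in the Remark of Appendix 5.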

Appendix 4: First monotonicity theorem

In this section we prove Theorem 2 (First Monotonicity Theorem) of Sect. 2.3.

Assertion (i) is immediate from the equation

$$ \tau (\lambda)=\frac{1-u(\lambda)}{1+\lambda}. $$
(36)

since q = 1/(1 + λ).

As to assertion (ii), we note that

$$ \begin{aligned} \tau^{\prime} &=\frac{-u^{\prime}(\lambda)(1+\lambda)-(1-u(\lambda))}{ (1+\lambda)^{2}} \\ &=\frac{[u(\lambda)-\lambda u^{\prime}(\lambda)]-u^{\prime}(\lambda)-1}{(1+\lambda)^{2}}=\frac{\varphi (\lambda)}{(1+\lambda)^{2}},\hbox{ say}. \end{aligned} $$

By assumption \(\varphi (0)<0.\) Now

$$ \varphi^{\prime}(\lambda)=-(\lambda +1)u^{\prime \prime}(\lambda)<0 \hbox{ for }u(\lambda)\hbox{ convex.} $$

So \(\varphi^{\prime}(\lambda)<0\), i.e. \(\varphi (\lambda)\) is decreasing for λ > 0; hence \(\varphi (\lambda)\leq \varphi (0)<0\) and so \(\tau^{\prime}(\lambda)<0\) for all λ > 0.

(iii) Here \(\varphi (\bar{\lambda})<0\) and this time \(\varphi (\lambda)\) is increasing so \(\varphi (\lambda)\leq \varphi (\bar{\lambda})<0\) for all \(0<\lambda <\bar{\lambda}\). Here again \(\tau^{\prime}(\lambda)<0\) for all \(0<\lambda <\bar{\lambda}\).
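
As a worked check of the convex case (our own example): for the CES family of Appendix 2 take \(u(\lambda)=(\kappa \lambda)^{1+\delta}\) with κ > 0 and δ ≥ 0, which is convex; then

$$ \varphi (\lambda)=[u(\lambda)-\lambda u^{\prime}(\lambda)]-u^{\prime}(\lambda)-1=-\delta (\kappa \lambda)^{1+\delta }-(1+\delta)\kappa (\kappa \lambda)^{\delta }-1<0\quad \hbox{ for all }\lambda \geq 0, $$

so in particular \(\varphi (0)\leq -1<0\) and \(\tau^{\prime}(\lambda)<0\) throughout, as in assertion (ii).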

Appendix 5: Technicalities on stochastic dominance

Recall from Appendices 2 and 3 [cf. Eq. (33)] that \(u(\lambda):=\mathcal{M}_{U}(\lambda, 1)=U_{z}(\lambda, 1)/U_{y}(\lambda, 1)\) is the marginal rate of substitution for the utility U(y, z), assumed to be either homothetic or homogeneous of degree 0 in (y, z). We assume that u(λ) is increasing in λ and so has an inverse. Throughout we regard u as fixed. We will need to assume below that u(λ) varies regularly enough.

Definition

Put

$$ G^{u}(t),\hbox{ or}\,G(t):=\frac{u^{-1}(F(t))}{H(t)} $$
(37)

and say that the distribution \(F_{1}\) increasingly dominates \(F_{2}\) if \(m_{2}\leq m_{1}\) and

$$ G_{1}(t)\geq G_{2}(t),\hbox{ for all }t\hbox{ with }0<t\leq m_{2}, $$

and also \(H_{1}(m_{2})<H_{2}(m_{2}).\)

Our first result shows that in cases of interest G(t), as defined above, is a decreasing function of t. The regularity condition quoted below is connected to the theory of regular variation, for which see Bingham et al. (1987), subsection 1.8.

Proposition 1

For H(t) log-concave, u(λ) strictly increasing and satisfying the regularity condition

$$ \frac{u(\lambda)}{\lambda u^{\prime}(\lambda)}\leq 1, $$
(38)

in particular for \(u(\lambda)=\lambda^{\delta}\) with δ ≥ 1, the function \(G^{u}\) is decreasing.

Proof

For t = u(λ) with u strictly increasing, put \(v=u^{-1}\) so that λ = v(t) and \(u^{\prime}(\lambda)=1/v^{\prime}(t),\) and note that

$$ \frac{v(t)}{tv^{\prime}(t)}=\frac{\lambda u^{\prime}(\lambda)}{u(\lambda)}\geq 1. $$
(39)

But

$$ \frac{d}{dt}\left(\frac{v(F(t))}{H(t)}\right) =\frac{v^{\prime}(F)fH-v(F)F}{H^{2}}=\frac{v^{\prime}(F)}{H^{2}} \left(Hf-F^{2}\frac{v(F)}{Fv^{\prime}(F)}\right), $$

so, in view of the log-concavity of H (cf. (40)) and the growth condition (38) via (39),

$$ \frac{d}{dt}\left(G^{u}(t)\right) \leq \frac{v^{\prime}(F)}{H^{2}}\left(Hf-F^{2}\right) <0. $$

Remark

When u(λ) = λ² the increasing dominance condition reduces to the convexity of \(H(t)^{-1}\), i.e. the ‘reciprocal convexity’ of H(t). Indeed \(u^{-1}(t)=t^{1/2}\), which yields

$$ G(t)^{2}=\frac{F(t)}{H(t)^{2}}=-\frac{d}{dt}(H(t))^{-1}. $$

Here G is decreasing iff \(G^{2}\) is decreasing iff \(H(t)^{-1}\) is convex, and so \(-H(t)^{-1}\) is concave.
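
As a quick numerical check (ours; the normal specification N(1, 0.5²) is an illustrative assumption), one can verify on a grid that \(G(t)=\sqrt{F(t)}/H(t)\) is indeed decreasing on (0, m), using the standard closed form \(H(t)={\mathbb{E}}[(t-X)^{+}]=(t-\mu)\Phi ((t-\mu)/\sigma)+\sigma \phi ((t-\mu)/\sigma)\) for the normal hemi-mean:

```python
# Quick numerical check (ours): for u(lambda) = lambda^2, i.e. v(t) = sqrt(t),
# verify on a grid that G(t) = sqrt(F(t))/H(t) is decreasing on (0, m),
# assuming for illustration that X ~ N(1, 0.5^2).  H uses the standard
# closed form for the normal lower partial moment E[(t - X)^+].
import numpy as np
from scipy.stats import norm

mu, sigma = 1.0, 0.5

def H(t):
    z = (t - mu) / sigma
    return (t - mu) * norm.cdf(z) + sigma * norm.pdf(z)

t = np.linspace(0.05, mu, 400)
G = np.sqrt(norm.cdf((t - mu) / sigma)) / H(t)
print("G decreasing on the grid:", bool(np.all(np.diff(G) < 0)))
```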

Recall the definition of ρ-concavity of a function g, due to Caplin and Nalebuff (1991), which includes log-concavity when ρ = 0 (i.e. log g is concave), and signifies that for ρ > 0 the function \(g^{\rho}\) is concave, whereas for ρ < 0 the function \((-g^{\rho})\) is concave. We note that if g is ρ-concave then it is also \(\rho^{\prime}\)-concave for all \(\rho^{\prime}<\rho.\) In particular, if H is log-concave, then it is (−1)-concave.

Note that increasing dominance is a restricted form of dominance. Taking now u(t) = t, the mean-zero normal family directed by σ exhibits first-order dominance to the left of the mean, as \(\sigma_{1}<\sigma_{2}\) implies that for x < 0

$$ \frac{x}{\sigma_{1}}<\frac{x}{\sigma_{2}}\hbox{ and so}\,F_{\rm N}(x;0,\sigma_{1})=F_{\rm N}(\frac{x} {\sigma_{1}};0,1)<F_{\rm N} (\frac{x}{\sigma_{2}};0,1)=F_{\rm N}(x;0,\sigma_{2}), $$

where \(F_{\rm N}(t;\mu ,\sigma)\) denotes the normal distribution function (with mean μ and standard deviation σ). We recall that a twice differentiable function g(t) is log-concave if

$$ g^{\prime \prime}g-(g^{\prime})^{2}<0. $$
(40)

(See e.g. Bergstrom and Bagnoli (2005), An (1998), or Caplin and Nalebuff (1991).)

We have just seen that, on (0, m), if H(t) is log-concave then \(G^{u}(t)\) is decreasing; so the condition that \(G^{u}(t)\) be decreasing is a weakening of log-concavity.

Our next result shows that the assumption of a decreasing \(G^{u}(t)\) represents only a mild form of regularity and does not need even as much as log-concavity.

Proposition 2

Suppose the density function is such that the limit \(f(0+):=\lim_{t\searrow 0}f(t)\) exists. Then, assuming the following limit exists, we have:

$$ \lim_{t\searrow 0}\frac{f/F}{F/H}\leq 1. $$

If further \(f^{\prime}(0+)\) exists, then we have in fact

$$ \lim_{t\searrow 0}\frac{f/F}{F/H}=1. $$

Consequently, under the regularity condition (38), the function

$$ G(t)=\frac{u^{-1}(F(t))}{H(t)} $$

is decreasing in an interval to the right of the origin.

We recall and prove Theorem 3 from Sect. 3.2.

Theorem 3

Given two distributions, with log-concave hemi-means, with F 1 increasingly dominating F 2, the corresponding optimized Dye triggers satisfy

$$ \hat{\gamma}_{1}>\hat{\gamma}_{2}. $$

Proof

The optimality condition requires \(\gamma_{F}=\gamma (\hat{q})\) to satisfy

$$ F_{i}(\gamma)=u(\hat{\lambda})=u\left(\frac{H_{i}(\gamma)}{m_{i}-\gamma} \right), $$

or, with v = u −1,

$$ \frac{1}{m_{i}-\gamma}=v(F_{i}(\gamma))/H_{i}(\gamma). $$

or simply

$$ G_{i}^{v}(\gamma)=\frac{1}{m_{i}-\gamma}. $$

Recalling that \(m_{2}<m_{1}\), we conclude, for \(0<t<m_{2}\), that

$$ \frac{1}{m_{2}-t}>\frac{1}{m_{1}-t}. $$

The function \(1/(m_{1}-t)\) is strictly increasing, so it intersects the graph of \(G_{2}(t)\) earlier than it intersects the graph of \(G_{1}(t)\). The intersection of \(G_{1}\) with \(1/(m_{1}-t)\) is in turn later than its intersection with \(1/(m_{2}-t)\). The result is now clear. See the illustration in Fig. 2.


About this article

Cite this article

Gietzmann, M.B., Ostaszewski, A.J. Why managers with low forecast precision select high disclosure intensity: an equilibrium analysis. Rev Quant Finan Acc 43, 121–153 (2014). https://doi.org/10.1007/s11156-013-0367-7

