Part of the book series: Computational Imaging and Vision (CIVI, volume 17)

Abstract

Computer vision, like any other scientific field, can be divided into a theoretical and an experimental part. A set of reasoned ideas proposed in the theoretical part is verified by experiments. Furthermore, discernible discrepancies between theory and real-world facts and events are discovered during experiments. Thus, the theoretical and experimental parts are coupled, forming a spiral along which a theoretical model is improved in the sense that the discrepancies become smaller and smaller. The discrepancies are often measured as differences between theoretical and measured quantities, provided both are well established, as they are in physics. However, in a number of fields, including computer vision, we are still searching for such simple quantities. The traditional solution is to define a set of performance indices and interpretation rules. From this point of view, it seems rather curious that performance evaluation of computer vision algorithms, i.e. the measurement of such discrepancies, is not widely supported and must still be justified (Forstner, 1996).
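
As a minimal illustration of the kind of discrepancy measurement and performance indices mentioned above, the sketch below compares an algorithm's output against ground truth using a few simple indices (mean absolute error, root-mean-square error, and a tolerance-based acceptance rate). The function name, the choice of indices, and the tolerance value are illustrative assumptions, not prescriptions taken from the chapter.

```python
import numpy as np

def discrepancy_indices(measured, predicted, tolerance=1.0):
    """Compare measured quantities with theoretical predictions.

    Returns a small set of performance indices; the indices and the
    tolerance are illustrative assumptions, not the chapter's method.
    """
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residuals = measured - predicted

    return {
        "mae": float(np.mean(np.abs(residuals))),         # mean absolute error
        "rmse": float(np.sqrt(np.mean(residuals ** 2))),  # root-mean-square error
        # fraction of residuals within the tolerance (a simple interpretation rule)
        "within_tolerance": float(np.mean(np.abs(residuals) <= tolerance)),
    }

# Example: edge positions (in pixels) predicted by a model vs. measured positions.
measured = [10.2, 20.1, 29.8, 40.5]
predicted = [10.0, 20.0, 30.0, 40.0]
print(discrepancy_indices(measured, predicted, tolerance=0.3))
```

An interpretation rule might then, for instance, accept an algorithm when a chosen fraction of residuals falls within the tolerance; the point is that such indices and rules must be made explicit before discrepancies between theory and experiment can be compared at all.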

References

  • Allan, A.L. (1993) Practical Surveying and Computations, Butterworth-Heinemann, Oxford, second edition.

  • Clifford, A.A. (1973) Multivariate Error Analysis, Applied Science Publishers, Barking, Essex, England.

  • Forstner, W. (1996) 10 pros and cons against performance characterization of vision algorithms, Christensen, H.I., Forstner, W. and Madsen, C.B. (eds.), Workshop on Performance Characteristics of Vision Algorithms, Proceedings, April 19, Cambridge, UK, 13–29, http://www.vision.auc.dk/-hic/performance-ws.html. Sponsored by the European Network of Excellence in Computer Vision, http://afrodite.dist.unige.it.

  • Franceschini, F. and Rossetto, S. (1997) Design for quality: Selecting a product's technical features, Quality Engineering, 9 (4): 681–688.

  • Garvin, D.A. (1996) Competing on the eight dimensions of quality, IEEE Engineering Management Review, Spring 1996, 15–23.

  • Hall, T. and Wilson, D. (1997) Views of software quality: a field report, IEE Proceedings Software Engineering, 144 (2): 111–118.

  • Haralick, R.M. (1994) Propagating covariance in computer vision, 12th International Conference on Pattern Recognition (Jerusalem, Israel), volume I, IEEE Computer Society Press, Washington, DC, 493–498.

  • Heath, M.D., Sarkar, S., Sanocki, T. and Bowyer, K.W. (1997) A robust visual method for assessing the relative performance of edge-detection algorithms, IEEE PAMI, 19 (12): 1338–1369.

  • Khattree, R. (1996) Robust parameter design: A response surface approach, Journal of Quality Technology, 28 (2): 187–198.

  • Kolarik, W.J. (1995) Creating Quality: Concepts, Systems, Strategies, and Tools, McGraw-Hill, Inc.

  • Logothetis, N. and Wynn, H.P. (1989) Quality through Design, Experimental Design, Off-line Quality Control and Taguchi’s Contributions, Clarendon Press, Oxford.

  • Montgomery, D.C. (1991) Design and Analysis of Experiments, John Wiley and Sons, third edition.

  • Morisio, M. and Tsoukias, A. (1997) IusWare: a methodology for the evaluation and selection of software products, IEE Proceedings Software Engineering, 144 (3): 162–174.

  • Nedialkov, N.S. (1994) Precision control and exception handling in scientific computing, Master’s thesis, University of Toronto, Department of Computer Science.

  • Phadke, M.S. (1989) Quality Engineering using Robust Design, Prentice-Hall International, Inc.

  • Pyle, I. (1996a) Performance considerations in COMPLEMENT, IEEE Symposium and Workshop on Engineering of Computer-Based Systems, IEEE Computer Society Press, 206–213.

  • Pyle, I. (1996b) Quality in software based systems, IEEE Symposium and Workshop on Engineering of Computer-Based Systems, IEEE Computer Society Press, 214–218.

  • Ross, P.J. (1988) Taguchi Techniques for Quality Engineering: Loss Function, Orthogonal Experiments, Parameter and Tolerance Design, McGraw-Hill Book Company.

  • Taguchi, G. (1986) Introduction to Quality Engineering, Asian Productivity Organization, Tokyo, Japan.

  • Taguchi, G., Elsayed, E.A. and Hsiang, T.C. (1989) Quality Engineering in Production Systems, McGraw-Hill Book Company.

  • Trivedi, K.S., Ciardo, G., Malhotra, M. and Sahner, R.A. (1993) Dependability and Performability Analysis, Donatiello, L. and Nelson, R. (eds.), Performance Evaluation of Computer and Communication Systems, Joint Tutorial Papers of Performance '93 and Sigmetrics '93, Springer-Verlag, 587–612.

  • Woodside, C.M. (1993) Performance Engineering of Client-Server Systems, Donatiello, L. and Nelson, R. (eds.), Performance Evaluation of Computer and Communication Systems, Joint Tutorial Papers of Performance '93 and Sigmetrics '93, Springer-Verlag, 394–410.

  • Zhang, Y.J. (1996) A survey on evaluation methods for image segmentation, Pattern Recognition, 29 (8): 1335–1346.

Copyright information

© 2000 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Mařík, R. (2000). Quality in Computer Vision. In: Klette, R., Stiehl, H.S., Viergever, M.A., Vincken, K.L. (eds) Performance Characterization in Computer Vision. Computational Imaging and Vision, vol 17. Springer, Dordrecht. https://doi.org/10.1007/978-94-015-9538-4_4

  • DOI: https://doi.org/10.1007/978-94-015-9538-4_4

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-90-481-5487-6

  • Online ISBN: 978-94-015-9538-4

  • eBook Packages: Springer Book Archive
