Performance effects of formal modeling language differences: A combined abstraction level and construct complexity analysis

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

9 Scopus Citations

Author(s)

  • Hock-Hai Teo
  • Hock Chuan Chan
  • Kwok Kee Wei

Detail(s)

Original language: English
Pages (from-to): 160-175
Journal / Publication: IEEE Transactions on Professional Communication
Volume: 49
Issue number: 2
Publication status: Published - Jun 2006

Abstract

Understanding data-modeling performance can provide valuable lessons for the selection, training, research, and development of data models. Data modeling is the process of transforming expressions in loose natural language communications into formal diagrammatic or tabular expressions. While researchers generally agree that abstraction levels can be used to explain general performance differences across models, empirical studies have reported many construct level results that cannot be explained. To explore further explanations, we develop a set of model-specific construct complexity values based on both theoretical and empirical support from complexity research in databases and other areas. We find that abstraction levels and complexity values together are capable of providing a consistent explanation of laboratory experiment data. In our experiment, data were drawn from three models: the relational model, the extended-entity-relationship model, and the object-oriented model. With the newly developed complexity measures, a consistent explanation can be made for findings from other studies which provide sufficient model details for complexity values to be calculated. © 2006 IEEE.
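To illustrate the kind of transformation the abstract describes, the following is a minimal sketch (not taken from the paper; the requirement, names, and fields are hypothetical) of how one natural-language statement can be expressed in a relational/tabular form versus an object-oriented form, two of the three models compared in the study:

```python
# Illustrative only: a hypothetical requirement, "each employee works in exactly
# one department", expressed two ways. All identifiers here are invented and do
# not reproduce the experimental materials used in the paper.

from dataclasses import dataclass

# Relational (tabular) expression: flat rows linked by a foreign-key value.
department_rows = [
    {"dept_id": 1, "name": "Sales"},
]
employee_rows = [
    {"emp_id": 10, "name": "Lee", "dept_id": 1},  # dept_id references a department row
]

# Object-oriented expression: the relationship is a direct object reference.
@dataclass
class Department:
    name: str

@dataclass
class Employee:
    name: str
    department: Department  # navigate directly to the related object

if __name__ == "__main__":
    sales = Department(name="Sales")
    lee = Employee(name="Lee", department=sales)
    print(lee.department.name)  # prints "Sales"
```

The two expressions capture the same fact, but differ in how the relationship construct is represented, which is the kind of construct-level difference the paper's complexity values aim to quantify.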

Research Area(s)

  • Construct complexity, Data model, Human factors, Modeling performance
