Document Text (Pages 31-40)

The influence of commercial capabilities and orientations in new technology ventures

by Graaff, Joost Adriaan, MS


Narver and Slater (1990). Shared interpretation was derived from Hult et al. (2005). Pro-active SO was measured according to van der Borgh et al. (2010). A subjective business performance measure was adapted from Moorman (1995), supplemented with items generated in the open interviews of Witte (2012). The measure of the level of technology push of Witte (2012) was extended with two additional items to increase the reliability of the construct. Finally, in addition to the latent constructs, the survey included several general questions used to characterize the sample and to serve as control variables.

4.3 Sample characteristics

All respondents were part of the team when the start-up was founded, mainly in the role of CEO or CTO. Of those respondents, 60% had previous entrepreneurial experience, and the average age was 39 years. The larger part (88%) of the start-ups were supported by an incubator, because the majority of the start-ups were found through the websites of Dutch incubators. Figure 4 shows that 41% of the start-ups deliver a product and 10% deliver a service; almost half (49%) were involved in both a product and a service. The market the start-ups serve is mostly B2B (85%), as illustrated in Figure 5. This type of market and innovation is characteristic of the technology-driven mind-set of the start-ups.




Figure 4 Product or Service


Figure 5 Market type

4.4 Data analysis

A Partial Least Squares (PLS) approach to structural equation modelling (SEM) was used to analyse
the data. We followed Chin’s (1998) recommendation to use bootstrapping (with 500 runs) as the
resampling procedure. SmartPLS 2.0 software was used to run the analyses.



There are two popular approaches to SEM: covariance-based SEM (CBSEM) and PLS. PLS is more appropriate for small sample sizes, in contrast to CBSEM, which requires a hundred or more observations (Henseler et al., 2009). PLS has been performed with sample sizes as low as 50 or even fewer observations (Haenlein and Kaplan, 2004). PLS is furthermore non-parametric in nature and does not require the data to be normally distributed.
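SmartPLS performs the bootstrapping internally over the full path model. As a rough illustration of the resampling idea only (not the thesis's actual estimation), a single standardized path coefficient can be bootstrapped as follows, with synthetic data standing in for the survey scores:

```python
import numpy as np

def bootstrap_path_coefficient(x, y, n_runs=500, seed=0):
    """Bootstrap a standardized x -> y path coefficient.

    Illustrative sketch: for a single predictor, the standardized
    regression slope equals the Pearson correlation.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = np.empty(n_runs)
    for i in range(n_runs):
        idx = rng.integers(0, n, size=n)       # resample cases with replacement
        estimates[i] = np.corrcoef(x[idx], y[idx])[0, 1]
    point = np.corrcoef(x, y)[0, 1]            # point estimate on full sample
    se = estimates.std(ddof=1)                 # bootstrap standard error
    return point, se, point / se               # coefficient, s.e., t-value

# Synthetic data with a known positive relation (hypothetical scores)
rng = np.random.default_rng(42)
x = rng.normal(size=60)
y = 0.4 * x + rng.normal(scale=0.9, size=60)
coef, se, t = bootstrap_path_coefficient(x, y)
```

The t-value reported by this procedure is the point estimate divided by the bootstrap standard error, which is how the significance levels in Table 5 are typically derived.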

4.5 Measure reliability and validity

To assess reliability, item reliability and internal consistency reliability are examined. To assess validity, convergent validity and discriminant validity are examined.
4.5.1 Internal consistency reliability
The traditional criterion for internal consistency is Cronbach's alpha, which provides a reliability estimate based on the indicator intercorrelations. However, Cronbach's alpha assumes that all indicators are equally reliable, whereas PLS prioritizes indicators according to their reliability, resulting in a more reliable composite (Hulland, 1999). As Cronbach's alpha tends to severely underestimate the internal consistency reliability of latent variables in PLS path models, it is more appropriate to apply a different measure, the composite reliability, which takes into account that indicators have different loadings (Henseler et al., 2009). An internal consistency reliability value above 0.7 in early stages of research, and above 0.8 or 0.9 in more advanced stages, is regarded as satisfactory (Nunnally and Bernstein, 1994), whereas a value below 0.6 indicates a lack of reliability. Table 1 shows that the Cronbach's alpha of the construct 'pro-active customer orientation' is below 0.6. Its composite reliability, which is more appropriate for PLS path models, is however above 0.7, indicating sufficient internal consistency reliability.
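Both measures can be computed directly. The sketch below (Python, for illustration) applies the standard formulas to the retained business-performance loadings from Table 2; the values in Table 1 come from SmartPLS on the final model, so the numbers need not match exactly:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha from an (n_respondents, k_items) data matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()    # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the sum score
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings):
    """CR = (sum l)^2 / ((sum l)^2 + sum(1 - l^2)), l = standardized loadings."""
    l = np.asarray(loadings, dtype=float)
    num = l.sum() ** 2
    return num / (num + (1.0 - l**2).sum())

# Retained business-performance loadings BP1-BP3 (Table 2)
cr = composite_reliability([0.727, 0.859, 0.845])
```

Unlike Cronbach's alpha, the composite reliability weights each indicator by its own loading, which is why it is preferred for PLS composites.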
4.5.2 Item reliability
The item reliability assesses the internal consistency of the items in the measures. In PLS, item reliability can be assessed with the factor loadings. Table 2 shows the measured items with their factor loadings. A common rule of thumb is that the construct should explain at least 50% of an indicator's variance; since 0.7 squared is approximately 0.5, outer loadings should be above 0.7. However, in PLS one should be careful about eliminating indicators: only if an indicator's reliability is low and eliminating it goes along with a substantial increase in composite reliability does it make sense to discard the indicator (Henseler et al., 2009).
With the above in mind, this study maintains a threshold of 0.7 or higher for the outer loadings, with few exceptions, to secure a sufficient level of internal consistency. This led to the deletion of the items BP4, BP5, MC3, SC3, CoO2, CuO1, CuO4, SO2 and ShI2.



4.5.3 Convergent validity
Convergent validity signifies that the indicators represent one and the same underlying construct. We examine convergent validity, as proposed by Fornell and Larcker (1981), with the average variance extracted (AVE). An AVE value of 0.5 or higher indicates that a latent variable explains, on average, more than half of the variance of its indicators. Table 1 shows that all AVE values are at or practically above the 0.5 threshold.
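The AVE is simply the mean squared standardized loading of a construct's indicators. As a small illustration (again using the retained business-performance loadings from Table 2; the Table 1 value was computed by SmartPLS on the final model, so figures differ):

```python
import numpy as np

def average_variance_extracted(loadings):
    """AVE: mean squared standardized loading, i.e. the average share of
    indicator variance that the construct explains."""
    l = np.asarray(loadings, dtype=float)
    return float((l**2).mean())

# Retained business-performance loadings BP1-BP3 (Table 2)
ave_bp = average_variance_extracted([0.727, 0.859, 0.845])
```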
4.5.4 Discriminant validity
Discriminant validity measures whether a given construct differs from the other constructs in the same model. We assess discriminant validity with the Fornell-Larcker criterion and the cross-loadings of the items. First, Fornell and Larcker (1981) propose that a latent variable should share more variance with its assigned indicators than with any other latent variable. In statistical terms, the AVE of each latent variable should be greater than the latent variable's highest squared correlation with any other latent variable. Table 1 shows the square-root AVE values and the correlations between the latent variables. The correlations do not exceed the square-root AVE values, so discriminant validity is secured according to the Fornell-Larcker criterion.
Next, the cross-loadings of the items are checked (see Table 6, Appendix A). The loading of each indicator on its own construct should be higher than any of its cross-loadings on other constructs; Table 6 shows that this is the case. Furthermore, according to Hair (2009), cross-loadings should not exceed 0.6. The results in Table 6 show that no items exceed that value, and therefore no items are deleted based on this criterion.
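The Fornell-Larcker check reduces to a simple matrix comparison. A minimal sketch, using three constructs and their correlations from Table 1 (business performance, commercial capabilities, competitor orientation):

```python
import numpy as np

def fornell_larcker_ok(ave, corr):
    """True if the square root of each latent variable's AVE exceeds its
    absolute correlation with every other latent variable."""
    sqrt_ave = np.sqrt(np.asarray(ave, dtype=float))
    corr = np.abs(np.asarray(corr, dtype=float))
    np.fill_diagonal(corr, 0.0)                  # ignore self-correlations
    return bool(np.all(corr.max(axis=1) < sqrt_ave))

# AVEs and correlations from Table 1 (X3, X4, X5)
ave = [0.77, 0.73, 0.58]
corr = [[1.00, 0.27, 0.47],
        [0.27, 1.00, 0.26],
        [0.47, 0.26, 1.00]]
ok = fornell_larcker_ok(ave, corr)
```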



Table 1 Descriptives, reliability, validity and construct interrelations

Variables | Mean (S.D.) | Cronbach's alpha | Composite reliability | AVE | Square root AVE | X1 | X2 | X3 | X4 | X5 | X6 | X7 | X8
X1 Age | 39.72 (9.27) | N/A | N/A | N/A | N/A |
X2 B2B | 0.85 (0.36) | N/A | N/A | N/A | N/A | 0.20
X3 Business performance | 2.93 (0.92) | 0.852 | 0.911 | 0.77 | 0.88 | -0.05 | -0.03
X4 Commercial capabilities | 3.16 (1.30) | 0.877 | 0.914 | 0.73 | 0.85 | 0.33 | -0.08 | 0.27
X5 Competitor orientation | 3.42 (0.89) | 0.654 | 0.806 | 0.58 | 0.76 | 0.03 | 0.08 | 0.47 | 0.26
X6 Pro-active customer orientation | 4.09 (0.65) | 0.521 | 0.736 | 0.49 | 0.70 | 0.04 | 0.16 | 0.18 | 0.09 | 0.27
X7 Pro-active sales orientation | 3.59 (1.03) | 0.813 | 0.888 | 0.72 | 0.85 | 0.08 | 0.02 | 0.15 | 0.32 | 0.25 | 0.28
X8 Shared interpretation | 4.03 (0.73) | 0.662 | 0.802 | 0.59 | 0.77 | 0.18 | 0.05 | 0.17 | 0.23 | 0.35 | 0.47 | 0.32
X9 Tech push | 3.9 (1.01) | 0.619 | N/A | N/A | N/A | -0.08 | -0.06 | 0.27 | -0.03 | 0.18 | -0.14 | 0.05 | 0.05



Table 2 Constructs, items and survey questions
Construct Item Survey questions Factor loadings
Business performance How did the organization perform, relative to…
Moorman (1995) BP1* Return on investment objectives? 0.727
BP2* Sales and customer growth objectives? 0.859
BP3* Market share objectives? 0.845
BP4 Innovation reputation objectives? 0.602
BP5 Planned value creation objectives? 0.605
Commercial capabilities In our organization, myself or one or more of my colleagues had...
Pitkänen et al. (2012) MC1* Work experience in advertising and promotion. 0.785
MC2* Experience in dividing the market into customer segments. 0.802
MC3 Academic studies in marketing. 0.810
SC1* Work experience in selling at the customer interface. 0.724
SC2* Experience in managing sales team/function. 0.640
SC3 Academic studies in selling. 0.785
Competitor orientation In our organization, we…
Narver and Slater (1990) CoO1* Exactly knew who our competitors were. 0.703
CoO2 Monitored new developments of our competitors. 0.707
CoO3* Did not know what attracted customers to competitors. 0.755
CoO4* Knew if our competitors' customers were satisfied. 0.771
Pro-active customer In our organization, we…
Blocker et al. (2011) CuO1 Continuously tried to discover additional needs of our customers of which they were unaware. 0.626
and Narver et al. (2004) CuO2* Frequently brainstormed on how customers will use our technology. 0.739
CuO3* Incorporated solutions to unarticulated customer requirements. 0.637
CuO4 Identified key market trends to gain insights into what users require in the future. 0.167
CuO5* Looked for clues beyond the requirements expressed by customers to identify their requirement drivers. 0.647



Pro-active sales In our organization we put a lot of time and energy into…
van der Borgh et al. (2010) SO1* Actual sales work of products/services to the potential customers. 0.847
SO2 The development of sales arguments for the product/service. 0.631
SO3* Experimenting with selling tactics with the potential customers. 0.833
SO4* Creating and identifying sales opportunities in the market. 0.866
Shared Interpretation In our organization, we…
Hult et al. (2005) ShI1* Jointly developed a shared understanding of the available market information. 0.562
ShI2 Formally met to discuss information regarding markets, customers and competitors. 0.442
ShI3* Jointly developed a shared understanding of the implications of market developments. 0.894
ShI4* Frequently met informally and discussed information regarding markets, customers and competitors. 0.823
Technology push TP1* Technological possibilities provided the driving force for the development of the project 0.951
TP2* Our product was driven by new technology opportunities. 0.649
TP3 Our product-technology combination was really new for the market. -0.060
* Included in SEM analysis based on factor loading and composite reliability



4.5.5 'Technology push' construct
The 'technology push' construct was originally measured with one item by Witte (2012). The important role of the level of technology push demanded a better measurement, and the construct was therefore extended with two extra survey questions (see Table 3).

Table 3 Items 'technology push'

Items Survey questions
TP1 Technological possibilities provided the driving force for the development of the project
TP2* Our product was driven by new technology opportunities.
TP3* Our product-technology combination was really new for the market.
Notes: * Newly added items

A Confirmatory Factor Analysis (CFA) with Varimax rotation and a reliability analysis were executed in IBM SPSS Statistics 20 in order to test whether the items relate to one construct. The CFA identified two components, shown in Table 4 with their item loadings.

Table 4 Rotated Component Matrix
Component 1 Component 2
TP1 .951 -.066
TP2 .649 .639
TP3 -.060 .953

Table 4 shows that the 'technology push' construct can be extended with the item TP2, maintaining a threshold of 0.6 or higher. Although TP3 meets the criterion on component 2, it does not relate to the originally measured item TP1. Therefore, TP3 is excluded. A reliability analysis of the items TP1 and TP2 resulted in an acceptable Cronbach's alpha (0.619). Finally, the average of the items TP1 and TP2 was calculated and added to the original data of Witte (2012).
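The rotation above was run in SPSS; for readers without SPSS, the Varimax step can be sketched in a few lines of numpy. This is an illustrative implementation of Kaiser's criterion applied to a hypothetical two-component loading matrix, not the thesis's actual SPSS output:

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Varimax rotation of a loading matrix (Kaiser's criterion, SVD form)."""
    L = np.asarray(loadings, dtype=float)
    p, k = L.shape
    R = np.eye(k)                                  # accumulated rotation
    d_old = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        # Gradient of the varimax criterion, solved via SVD
        u, s, vt = np.linalg.svd(
            L.T @ (Lr**3 - Lr @ np.diag((Lr**2).sum(axis=0)) / p))
        R = u @ vt
        d = s.sum()
        if d_old != 0 and d / d_old < 1 + tol:     # converged
            break
        d_old = d
    return L @ R

# Hypothetical unrotated two-component loadings for three items
L = np.array([[0.7, 0.3],
              [0.6, 0.4],
              [0.2, 0.8]])
rotated = varimax(L)
```

Because the rotation matrix is orthogonal, the communalities (row sums of squared loadings) are unchanged; only the distribution of loading across components is simplified.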



5 Results

With the use of structural equation modelling (SEM), four models test the hypotheses. Assessing multiple models avoids distorted results due to the many interaction terms present.
To assess Hypotheses 1, 2, 3 and 4, a main effects model is estimated (Model 1) with direct paths to business performance. To test Hypotheses 5, 6 and 7, we estimated Model 2, which adds the interaction effects between commercial capabilities and the orientations on business performance. Model 3 tests the moderating effect of shared interpretation on the relationships between the commercial orientations and business performance (Hypotheses 8, 9 and 10). Model 4 tests the moderating effect of technology push on those same relationships (Hypotheses 11, 12 and 13). The results of the SEM analyses are presented in Table 5.
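The interaction models follow the usual moderation logic. A minimal sketch of the idea (the thesis builds the interaction terms inside SmartPLS; here mean-centered product terms are added to an ordinary least-squares model on synthetic, hypothetical construct scores):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 60
capabilities = rng.normal(size=n)        # hypothetical latent-variable scores
orientation = rng.normal(size=n)
performance = (0.2 * capabilities + 0.35 * orientation
               + 0.25 * capabilities * orientation
               + rng.normal(scale=0.8, size=n))

c = capabilities - capabilities.mean()   # mean-center before multiplying
o = orientation - orientation.mean()
X = np.column_stack([np.ones(n), c, o, c * o])  # intercept, main effects, interaction
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
```

Mean-centering the components before forming the product term reduces collinearity between the main effects and the interaction, which keeps the coefficient estimates interpretable.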
In support of H1, a significant positive direct effect of the founders' commercial capabilities on business performance can be seen in all estimated models. A pro-active customer orientation does not affect business performance at a significant level in any of the models, so H2 lacks support. A pro-active SO does not influence business performance significantly in any of the models either, thereby rejecting H3. Support is found for H4, which links a competitor orientation directly and significantly to business performance in all models.
In Model 2, the interaction of commercial capabilities and a pro-active customer orientation is significantly and negatively linked to business performance, which does not support H5. No interaction effect of commercial capabilities and a pro-active SO on business performance can be seen, so H6 lacks support. However, the interaction effect of commercial capabilities and a competitor orientation is significantly positive, confirming H7.
The results of Model 3 show a significant negative moderating effect of shared interpretation on the relationship between a pro-active customer orientation and business performance. The effect of a pro-active SO on business performance is also negatively moderated by shared interpretation. Both path coefficients are thus opposite to the hypothesized direction, so no support for H8 and H9 can be found. No significant moderating effect of shared interpretation on the relationship between a competitor orientation and business performance is found, rejecting H10.



Table 5 Overview of results of the SEM models
Model 1: main effects (R² = 0.31); Model 2: interactions capabilities × orientations (R² = 0.43); Model 3: interactions orientations × shared interpretation (R² = 0.47); Model 4: interactions orientations × technology push (R² = 0.41). Cells show coefficient (t-value).

Paths modelled (→ Business performance) | Model 1 | Model 2 | Model 3 | Model 4
Commercial capabilities | 0.22** (2.04) | 0.23** (2.14) | 0.16* (1.65) | 0.19* (1.76)
Pro-active customer orientation | 0.14 (1.13) | 0.07 (0.69) | 0.11 (1.02) | 0.14 (1.11)
Pro-active sales orientation | -0.03 (0.38) | -0.07 (0.29) | 0.03 (0.35) | -0.08 (0.60)
Competitor orientation | 0.37*** (3.10) | 0.35*** (2.83) | 0.34*** (2.71) | 0.38** (2.56)
Shared interpretation | -0.06 (0.68) | 0.05 (0.54) | -0.02 (0.14) | 0.02 (0.18)
Technology push | 0.22** (2.03) | 0.15 (1.45) | 0.20* (1.91) | 0.13 (1.28)
B2B | -0.02 (0.27) | 0.02 (0.27) | 0.04 (0.27) | 0.01 (0.02)
Age of founder | -0.10 (1.09) | -0.09 (1.03) | -0.07 (0.83) | -0.01 (1.09)
Commercial capabilities × Pro-active customer orientation | | -0.31*** (2.75) | |
Commercial capabilities × Pro-active sales orientation | | 0.11 (0.97) | |
Commercial capabilities × Competitor orientation | | 0.23** (2.00) | |
Pro-active customer orientation × Shared interpretation | | | -0.23* (1.93) |
Pro-active sales orientation × Shared interpretation | | | -0.23* (1.87) |
Competitor orientation × Shared interpretation | | | 0.17 (1.24) |
Pro-active customer orientation × Technology push | | | | 0.15 (1.28)
Pro-active sales orientation × Technology push | | | | -0.05 (0.51)
Competitor orientation × Technology push | | | | 0.26** (2.15)
Notes: significance levels *** p < 0.01, ** p < 0.05, * p < 0.10

