Issues related to the enhancement of innovativeness are of pivotal importance when attempting to increase the competitive advantage of enterprises and their distinctive competencies, thus fostering the economic efficiency and growth of countries. Numerous studies (e.g., [40], [14]) highlight the importance of innovation for economic development and well-being. The ability of governments, businesses, and individuals to identify, respond to, and, most importantly, introduce change is regarded as the bedrock of competitive ability (e.g., [34], [5], [37]). At a microeconomic level, a practitioner’s viewpoint prevails. Continuous improvement in technology and business processes is recognized as vital to economic prosperity, thereby providing a strong incentive to invest in innovation [9, pp.133-140], [19, p.16]. Yet it remains unclear whether countries are innovative because they are rich, or rich because they are innovative. Examination of the National Innovation System (NIS) can be regarded as a starting point for the discussion of innovativeness. It is widely accepted that the results of studies dealing with innovativeness, and in particular with the efficiency of pro-innovative programs and NIS, are critical to the formulation of improvement initiatives. Such endeavours gain even more attention because of globalization and, in particular, the unification of the European Union (EU).

This paper is organized into the following parts. First, the role of National Innovation Systems is described, along with the limitations of this idea stemming mainly from imprecise definitions. Next, constraints imposed by the indicators of innovativeness that form composite indexes, and questions regarding the methodology of forming such indexes, are outlined. Objections regarding the European Innovation Scoreboard are mentioned. The next section is devoted to suggestions for studies related to the assessment of the technical efficiency of innovation systems. The last part provides a summary of the paper.

NATIONAL INNOVATION SYSTEM (NIS): A MEANS TO ORGANIZE, STIMULATE AND CONTROL PRO-INNOVATIVE ACTIVITIES

NIS and its Discussion Themes

At the macroeconomic level, questions of innovativeness are discussed from the perspective of National Innovation Systems (NIS), defined as a “network of agents and set policies and institutions that affect the introduction of technology that is new to economy” [10, p.541]. Such systems are critical for economic betterment (e.g., [12], [18], [33], [37]) and represent an important component of economic performance (e.g., [32], [44]).

Descriptions of NIS are easily available, yet there is a scarcity of methods to operationalize NIS and its components. Consequently, the interrelationships between the various elements that form NIS cannot be fully, quantitatively examined. The same is true when the impact of the context of operation upon the design of NIS is analysed. Some frequently used policy themes pertaining to the investigation of NIS include, yet are not limited to (see also [3]):
1. Governmental policies (including the promotion of an innovation-friendly environment) and NIS governance.
2. Commercialization of public research.
3. Development of human resources for innovation.
4. Funding of innovation.
Generally, these topics are consistent with the Lisbon Strategy [31], which is endorsed as a guide to the scientific development of the European Union. It should be noted that the elements in these thematic areas overlap, making reports on NIS at times redundant in terms of information content. Conclusions from such studies are difficult to quantify, as it is difficult to identify which solutions are correct, efficient and effective within a specific context of operations. Consequently, the idea of NIS, though very stimulating, is an abstract one, with little possibility of being translated into the language of daily business.

Innovation: Definition Related Dilemmas

One of the problems relevant to research on innovativeness is the difficulty of establishing a precise definition of innovation. There seems to be agreement on considering innovation to be a novelty applied to something that already exists. The disagreement arises as to whether the change should be new to the market in general or only to a particular company (e.g., [48]). The former, denoted for the purposes of this discussion as the Frascati [17] approach, suggests that innovation is rooted in the notion of novelty in global terms. These novelties are assessed indirectly by the level of various educational attainment statistics [12], [28], R&D expenditures [12], and patent counts [30], [23]. The latter, the Oslo Manual [38] approach, takes a more micro-economic perspective. It deals primarily with the implementation and adaptation of solutions, and is oriented towards a practitioner’s viewpoint. This approach conceptualizes innovation as an application for commercial purposes.

Results of the literature analysis related to innovations persistently suggest that even though the discussion is about similar phenomena, there is a gap between the macro and micro perspectives of innovativeness. Some differences between the macro and micro perspectives are summarized in Table 1.

Composite indexes are used in a variety of economic performance and policy areas. Such indexes integrate large amounts of information into easily understood formats, yet can be manipulated to produce desired outcomes. Moreover, there are several methodological problems regarding the creation of composite indexes [41]. Problems arise when examining the accuracy and reliability of these indexes. Quandaries regarding missing data, along with the question of index sensitivity to the weighting of indicators and their aggregation, are inevitable [20, p. 5]. Composite indexes measure complex, dynamic systems, and “many of their properties emerge from interactions among the entities in them” [29, p. 893]. Additional dilemmas become evident with the conceptual dichotomy between allocative efficiency (“Are we doing the right things?”) and technical efficiency (“Are we doing things the right way?”). Moreover, ambiguous definitions and various taxonomies are used to measure the consequences of achieved output (e.g., [43]).

Despite these problems, there has been a proliferation of composite indexes (e.g., [2, pp.179-181]). Given the noted problems, the need for and usefulness of such diversity among indexes is questionable. Despite being created with different intentions and using varying data series for calculations, these composite indexes actually produce similar results when ranking countries [36]. This occurs irrespective of whether they intend to measure the level of innovative capability, competitiveness, productivity, level of wealth, or standard of living.

It is important to note that several items from the composite indexes that may relate to the notion of innovativeness deal primarily with inventiveness (e.g., on the input side, expenditures on R&D and S&E graduates; on the output side, patents, trademarks, or “new products”). Thus, these indicators lean more towards the Frascati interpretation of innovations (hence inventions) [17, para.21 and 63]. This marks a divergence from innovations as interpreted by the Oslo Manual [32, para.131]. Consequently, it is arguable whether common composite innovativeness indexes such as the European Innovation Scoreboard (EIS) [13], the World Competitiveness Yearbook (WCY) [49], the Growth Competitiveness Report (GCR) [29], [30], the Knowledge Assessment Methodology (KAM) [28], [6], and the Human Development Index (HDI) [25] serve the needs of practitioners interested in the improvement of economic prosperity at a “shop floor level” [9], [26], or if they are primarily a policy formulation aid at the macroeconomic level.

Composite indexes typically use a variety of indicators (data series) to measure innovativeness. In such a way, they indicate which items of economic performance may contribute to the enhancement of innovativeness. This may provide policy-formulation-related suggestions. This implies an assumption, however, that some ‘policies’, as suggested by innovativeness indicators, will produce similar results irrespective of context in various countries. This may not be a correct assumption. Indicators used to measure innovativeness overlap, causing information redundancy. From the viewpoint of assessing outcomes, or statistical-type assessments, such a variety may not be needed. Thus, it is worth verifying whether or not a simpler composite index of innovativeness can be formed, and eventually applied to a broader range of countries.

Despite the aforementioned reservations, some researchers and experts claim that there is an extensive body of literature about indicators of innovativeness, and therefore the search for new concepts is fruitless. Such an opinion is only partly justified. Indeed there are descriptions, but they do not take several important elements into account:

1. How can one consider using some indicators of innovativeness as undeniable indicators of innovativeness, when a precise, unequivocal definition of innovativeness has not yet been agreed upon? Thus, are these indicators really dealing with innovativeness? It may be easier, and more justified, to use the approach used in some psychological studies: intelligence is the phenomenon measured by IQ tests. Similarly, innovativeness can be regarded as a construct determined as a result of questionnaire-type studies.

2. The question of the assessment of the level of innovativeness is still not clearly answered. This is particularly true when we take into account the dichotomies between inventiveness and innovativeness; definitions that follow the Frascati Manual and those that favour the suggestions of the Oslo Manual; differences between macro and micro approaches to innovativeness; and differences between big companies’ and small companies’ perspectives. Paradoxically, despite the fact that these dilemmas are known, when approaching the measurement of the level of innovativeness, the consequences of these different perspectives are ignored. The temptation to create one universal index is human nature. In reality, one thing can be said about such an index for certain: it is not known what it measures.

3. Several of the indicators of innovativeness that are used are difficult to measure, and their values are impacted by context. Therefore, they cannot be used in comparative studies. For example, the Community Innovation Survey (CIS), which is used in the EIS model, is EU-specific.

4. In their actual form, indexes of innovativeness (e.g., EISI) and, in a broader context, indexes of standard of living (HDI) and competitiveness (WCY, GCR) show some average value of arbitrarily selected inputs and outputs. After summing up the values of inputs and outputs (normally after standardization of their values), an index is provided. Generally, when the level of inputs is higher, the level of outputs is higher. Thus, in countries where expenditures on innovativeness are higher, the value of the innovativeness index is higher. However, this is still only one of the perspectives with which to examine the issue. According to the rankings by leading indexes, wealthy countries are ones where, at the very least, living conditions are relatively high. These countries are more competitive and innovative, etc. To a large extent, this is a correct conclusion. An important question, however, especially for countries that desire to join the league of the wealthy, is whether or not resources for innovations are utilized efficiently: that is, whether the given level of inputs results in the maximum level of outputs. The report by Nasierowski and Arcelus [35] indicates that often, wealthy countries invest more in innovations, and probably do it in a very appealing manner, but inefficiently. Quite frequently, countries that are classified as laggards in innovations do in fact spend less, but do so in efficient ways. Aspects of the technical efficiency of turning inputs into outputs should be one of the main research topics in the area of innovativeness, because the results of such studies may highlight Best Pro-Innovative Policies (BPIP).

5. Additionally, the large number of indicators of innovativeness that “must” be used in the model means that quantitative calculations of the efficiency of innovation systems cannot be carried out. Consequently, if one wants to start a discussion about the technical efficiency of innovation systems, one has to identify a method that facilitates a reduction in the number of statistical data series. A model developed on the basis of a reduced number of data series need not have less information content than currently used models; on the contrary, the new model may have more information content. Additionally, an orientation towards the measurement of technical efficiency will result in processing data according to statistical algorithms (thus reducing bias), and not according to arbitrary decisions based solely on expert knowledge.

6. One of the reasons for the critique of indexes used for the assessment of efficiency rests in the gap between theory, practice and currently available assessments of reality. Theory and practice indicate that innovativeness and entrepreneurship form the roots of economic and social development. Yet the Japanese economy received one of the lowest scores in terms of entrepreneurship [49]. The fact that one of the fastest growing economies in east Asia is labelled as non-innovative serves as indirect evidence that the currently used methods for assessing levels of innovativeness are questionable.

EUROPEAN INNOVATION SCOREBOARD (EIS):
THE LEADING, BUT DISPUTABLE MODEL

The vital role of innovation in boosting national competitiveness is recognized by most nations. Knowing a nation’s strengths and weaknesses allows a government to institute interventions aimed at fostering and improving its innovation record. Innovation indicators provide evidence of the importance for nations to recognize which aspects of their national environment push firms to evolve, how to measure the success of value added in innovative sectors, and how to assess intellectual innovation [12, p.6].

One example of efforts to track national innovation ability is the annual European Innovation Scoreboard (hereafter, EIS) and its associated composite index of innovativeness (the European Innovation Scoreboard Index, EISI). This scoreboard was “developed by the European Commission, under the Lisbon Strategy, to evaluate and compare the innovation performance of its Member States” [12, p. 3]. EIS is essentially confined to EU countries, because it uses indicators (data series) that are based on Community Innovation Surveys (CIS). The EIS concept deals mainly with:
1. Measurement of the level of innovativeness of EU countries and, on such a basis, the ranking of countries according to the level of innovativeness.
2. Setting indicators of innovativeness, showing governmental policies that can support innovativeness and, indirectly, economic growth.

In its assumptions, EIS should also facilitate suggestions that determine elements of economic activity and identify areas of social policy that should be supported in order to increase the level of innovativeness. In Europe, EIS forms some sort of a framework for the discussion of the issue. EIS solutions/conclusions have many very strong points, and it should be appreciated that it is one of the first comprehensive methods to assess NIS, with many intellectually stimulating observations and practical guidelines. Nevertheless, the EIS model forms an index (the EIS Index, EISI) that contains drawbacks: it cannot be used as an instrument to stimulate pro-innovation policies, and there are methodological controversies associated with its composition. Among the several objections expressed with respect to EISI, the following should be emphasized:

1. The first objection deals with the selection of indicators of innovativeness. For example, EISI uses three patent-related data series. These are highly positively correlated. Consequently, the same idea (concept) is used three times within the same index of innovativeness. One should also note that it is disputable whether patents indeed measure the level of innovativeness of SMEs. Patents are indicators of inventiveness, and this is generally the domain of research institutes. Yet another example: one of the indicators used relates to ICT (Information and Communication Technology). According to it, Poland, for example, is one of the leaders in ICT. Such a position does not result from the ease or density of access to ICT, but from the exceptionally high costs of ICT in Poland; this indicator is therefore simply misleading.

2. The second objection originates from the fact that despite the declaration that EISI measures innovations, it actually concentrates on inventiveness.

3. Furthermore, it is difficult to use EISI as an instrument that impacts the crafting of pro-innovative policies, because the set of indicators used changes from one year to the next. As a consequence, it is impossible to empirically prove that an increase in the level of any one indicator is indeed due to the efficiency of the policies utilized. Longitudinal studies cannot be carried out. It is also not known which indicators are the most important, because they all receive the same weight of importance.

4. Additionally, keeping in mind that innovativeness evolves within NIS, indicators used in EISI do not allow us to determine which legal/institutional policies are correct. There is no room in EISI to assess the quality of rules, policies or stimulators; the strength of barriers; institutions and their interactions with business and/or their fit to local conditions. These elements may in fact be critical to the efficiency of innovative systems.

5. An important question, which remains unanswered, relates to the following: why do all indicators of innovativeness used by EISI have the same weight of importance? This implies that the process of identifying the inputs and outputs required in the computation of the index of innovativeness includes the implicit assumption that all countries:
(i) are equally efficient in the transformation of their inputs into outputs; and
(ii) have the same context of operations.
These assumptions are certainly incorrect. In practice, it is more probable that an entirely different situation occurs: two different countries that use different compositions, or levels, of inputs will gain similar results. Concurrently, using the same inputs at the same levels, and depending on contextual elements, countries will achieve different outputs. This quandary certainly calls for a more detailed examination; at its current stage, the underlying assumptions of EIS are, at minimum, incomplete.

6. The EIS concept assumes that NIS policies within the European Union should be uniform (similar). Yet the improvement of innovative activities is impacted by context, and should not merely be determined on the basis of an average within the European Union. As well, the importance of specific means to improve innovativeness differs among countries: for example, Poland should invest in ICT (though it is ranked high according to this indicator) to a larger extent than Austria (because Austria has a stock of accumulated ICT capability).

As is evident from the above comments, EISI, a constrained solution to the measurement of innovativeness, has several limitations. In fact, it can hardly be used in practice as an instrument to craft pro-innovative policies, and it is associated with methodological shortcomings. Therefore, there is a need to identify solutions that will be useful not only for the purpose of ranking countries from the innovativeness viewpoint (which may be useful from a statistical-publication type of perspective), but that may also have practical applications. Such methods should facilitate the identification of indicators of innovativeness that in turn will allow the creation of a composite index, or indexes, satisfying the following criteria:
1. Be based on credible and easily available, statistical data series.
2. Be consistent with contemporary definitions of innovativeness and reflect themes relevant to NIS. It should address aspects of inventiveness as well as innovativeness, and be adequate for both big and small companies.
3. Facilitate the examination of the technical efficiency of systems that support innovativeness according to the description of the index.
4. Be an instrument useful for formulating policies/solutions oriented towards the enhancement of innovativeness, and an index that facilitates the control of the efficiency of such policies.

SUGGESTIONS FOR FURTHER AREAS TO INVESTIGATE INNOVATION SYSTEMS

Linking EIS Index to NIS (“Operationalization” of NIS, the search for Best Pro-Innovation Policies (BPIP), and selection of indicators of innovativeness).

The list of the key topics used to discuss NIS can be used as a starting point to examine the link between NIS and EISI. Once the list of such topics is agreed upon, one can isolate key motives within each topic in an attempt to identify:
1. Typical solutions/policies pertaining to the specific subject, in an attempt to identify the Best Pro-Innovation Policies (BPIP); and
2. Indicators (data series) that pertain to the topic (indicators that represent the phenomenon).

There is a need to find adequate indicators of the level of innovativeness and of the means to enhance it. This is an important theoretical and practical topic. The discussion about the enhancement of innovativeness is a current one, but has not resulted in practically useful conclusions. Consequently, its continuation is warranted, because conclusions that can be used in practice are very important. They can indicate which activities, and which specifics of economic growth policies, will facilitate and increase the effects, keeping in mind the available resources. The results of such studies can pinpoint solutions that brought positive results in other countries (BPIP) and can, following modifications, be transferred to other contexts of operations.

While many indicators of innovativeness are used, it cannot be taken for granted which ones should form a composite index of innovativeness that will comply with the criteria discussed earlier in part 4 of the paper. Keeping in mind the importance of the issue to social and economic development, the topic should not be neglected. Thus, it can be suggested that a link between “innovativeness leading themes” and indicators of innovativeness be created. These leading themes, in fact results of the assessment of the components of NIS, can be ‘somewhat’ quantified. Even if this is not a measurement, but solely an ‘expert’s’ assessment of the quality of solutions, it will facilitate a more quantitative handling of the associated questions. An initial attempt to use such an approach is visible in EU Trend Chart reports [3]. Note that the criteria used in these assessments have not been disclosed.

The selection of indicators of innovativeness can be done on the basis of literature review, whereas the initial and the leading motive can be the set of indicators used in EIS. There can be two implications of such a line of reasoning:
1. Indicators that will be used to create an index will have similar information content to EIS, but will be based on more easily available indicators. The adequacy of the result can be verified with the use of the Spearman rank correlation test.
2. Setting indexes of innovativeness can be done using two perspectives on innovativeness:
(i) the big-enterprise perspective, based on the Frascati Manual, which assumes a macro-economic perspective;
(ii) the SME view, which covers improvements at the operational level, and is rooted in the concepts described in the Oslo Manual.

There are strong correlations between the items in the EIS index, thus creating redundancies of information within its contents. Many elements are counted two or more times. It may be that the EIS index can be reflected by a simpler set of indicators yielding similar results when ranking the innovativeness of countries. Therefore:
- Is there a need to use as many indicators as in EIS (especially those that use data series specific to the EU alone) in order to measure innovativeness?
- How can a new (modified) index of innovativeness be constructed? Can an analysis of the data suggest an alternate categorization of the indicators, instead of the five pre-determined scales used by the EIS [12]?

A “compression” of the indicators used in EISI produces a trade-off between an index as a policy-making driver and an index as a platform for ranking purposes. Keeping in mind the number of “NIS themes” and the number of corresponding sub-topics, one may expect that the number of suggested indicators for innovativeness measurements will be extensive. In order to reduce the number of indicators (data series) and form a New Innovation Index (NII), a principal-components analysis can be performed on the 26 indicators used by EIS [12]. A similar approach is recommended by Saisana & Tarantola [41, p.12]. The following four-step procedure, based upon Hair, et al. [24], can be used, and has already been tested [47]:
1. Scan the anti-image correlation matrix and eliminate from the analysis those indicators with a sample adequacy below 0.7.
2. Repeat the analysis until the Kaiser-Meyer-Olkin indicator of sample adequacy reaches 0.8. If the indicator cannot reach that value, stop, since no statistically significant principal-components analysis can be undertaken.
3. Extract all factors that explain at least 10% of the variation and/or whose eigenvalue is at least 1; and
4. Interpret the resulting factors on the basis of the indicators with a factor loading of at least 0.5.
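Steps 1 and 2 of this screening can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: it computes the overall Kaiser-Meyer-Olkin (KMO) measure and the per-indicator sampling adequacies (the diagonal of the anti-image correlation analysis) from a hypothetical, synthetic country-by-indicator matrix.

```python
import numpy as np

def kmo(X):
    """Kaiser-Meyer-Olkin measure of sampling adequacy.

    Returns the overall KMO and the per-indicator adequacies used in
    steps 1-2 of the screening (drop indicators below 0.7, repeat the
    analysis until the overall value reaches 0.8).
    """
    R = np.corrcoef(X, rowvar=False)            # indicator correlation matrix
    A = np.linalg.pinv(R)                       # (pseudo-)inverse of R
    d = np.sqrt(np.outer(np.diag(A), np.diag(A)))
    P = -A / d                                  # partial (anti-image) correlations
    np.fill_diagonal(P, 0.0)
    R0 = R - np.eye(R.shape[0])                 # zero the diagonal of R
    r2, p2 = (R0 ** 2).sum(axis=0), (P ** 2).sum(axis=0)
    msa = r2 / (r2 + p2)                        # per-indicator adequacy
    overall = r2.sum() / (r2.sum() + p2.sum())  # overall KMO
    return overall, msa

# synthetic example: 30 countries x 6 indicators sharing a common factor
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 1)) + 0.5 * rng.normal(size=(30, 6))
overall, msa = kmo(X)
```

Indicators whose entry in `msa` falls below 0.7 would be eliminated and the computation repeated, mirroring the iterative procedure above.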

Once the number of factors has been established, the next step is to compute another measure of the EIS on the basis of the factor loadings obtained through the principal component analysis. The issue of contention is the weight to assign each factor in computing the overall score for each country. EIS [12] gives equal weight to each factor, thereby taking the average of the factor loadings as the overall country measure from which to obtain the ranks. One of the problems with this approach is the lack of evidence that each item is equally important in the rankings. A possible solution to this problem is to weigh each factor by the percentage of total variance explained by the said factor. In this paper, we rank the countries according to both criteria and use the Spearman correlation coefficient to test for the difference between the ranks. The evidence indicates that, with the p-values of the tests less than 10^-4, the ranks are not significantly affected by the ranking criterion utilized. Hence, both methods, i.e. with the inputs and outputs together and separately, yield similar rankings.
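The comparison of equal factor weights with variance-proportional weights can be reproduced in outline. The snippet below is a sketch on synthetic data (the country and indicator counts are illustrative, not the EIS figures): it extracts principal-component scores, builds both composite scores, and compares the resulting country orderings with the Spearman coefficient.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
# hypothetical data: 25 countries x 8 indicators sharing a common factor
X = rng.normal(size=(25, 1)) + 0.4 * rng.normal(size=(25, 8))
Xc = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize indicators

# principal components via SVD of the standardized matrix
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_explained = s**2 / (s**2).sum()         # share of variance per component
k = 3                                       # number of retained factors
scores = Xc @ Vt[:k].T                      # factor scores per country

equal_w = scores.mean(axis=1)               # equal weight per factor (EIS-style)
w = var_explained[:k] / var_explained[:k].sum()
var_w = scores @ w                          # weight by share of variance explained

rho, p = spearmanr(equal_w, var_w)          # compare the two country rankings
```

A high, significant `rho` would replicate the paper's finding that the two weighting criteria yield essentially the same country ranks.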

Technical Efficiency of Innovation Systems: DEA Results as a Measure of the Quality of Innovation Systems

It is reasonably clear from the preceding discussion that the methodological base given in EIS [12] is not free from controversy. The important observations dealing with the equal weights of inputs and outputs raise questions about the comparability of the efficiency of innovation systems. This leads to the important issue in this section, which is how to assess the technical efficiency (EFF) of countries in the process of transforming inputs into outputs. From a micro-economic perspective, this issue epitomizes the concept of Pareto-Koopmans efficiency [45], related to the ability of a country to minimize the inputs required to produce the maximum possible set of outputs. Hence a country “is fully efficient if and only if it is not possible to improve any input or output without worsening some other input or output” [8, p.45].

To model such a concept one can use the non-parametric technique of Data Envelopment Analysis (DEA) [15], [16] (and in particular [7], [8]), where the countries that fulfill the efficiency definition form a benchmarking frontier against which all others are evaluated. The inputs and outputs of the model are the three factors obtained from each of the two principal component analyses on the 26 EIS indicators described in the previous section. For the EFF formulations, the paper uses an input orientation, consistent with the belief that countries are in a better position to control the inputs than the outputs. Within this context, EFF relates to the minimization of the resource endowment needed to produce a set of outputs. Benchmark countries have an EFF of 1. For the others, EFF is measured in terms of how far they must reduce their input consumption to reach the levels of their efficient counterparts.
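The input-oriented envelopment formulation described above can be written as a small linear program. The following is a sketch, not the study's code: it uses a toy dataset (one input, one output, two units) and scipy's `linprog` to compute CCR (constant returns to scale) efficiency scores, with an optional convexity constraint for VRS.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_eff(X, Y, j0, vrs=False):
    """Input-oriented DEA efficiency of unit j0.

    X: (m inputs x n units), Y: (s outputs x n units).
    Solves min theta s.t. X @ lam <= theta * X[:, j0],
    Y @ lam >= Y[:, j0], lam >= 0; sum(lam) == 1 under VRS.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]               # minimize theta
    A_ub = np.vstack([
        np.hstack([-X[:, [j0]], X]),          # X lam - theta*x0 <= 0
        np.hstack([np.zeros((s, 1)), -Y]),    # -Y lam <= -y0
    ])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    A_eq = np.hstack([[0.0], np.ones(n)]).reshape(1, -1) if vrs else None
    b_eq = [1.0] if vrs else None
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# toy example: unit A uses 2 to produce 2; unit B uses 4 to produce 2
X = np.array([[2.0, 4.0]])
Y = np.array([[2.0, 2.0]])
eff_a = dea_input_eff(X, Y, 0)   # frontier unit
eff_b = dea_input_eff(X, Y, 1)   # must halve its input to reach the frontier
```

Unit A defines the frontier (score 1), while unit B, producing the same output from twice the input, receives a score of 0.5, i.e. it should be able to reach the frontier with half of its current input.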

Two crucial characteristics of a country’s production process may have an important impact on the efficiency computations. These are, returns to scale (RS) and congestion (CON), two key concepts of production economics (e.g., [7], [8], [47]).

RS deals with the rate of change in the inputs utilized, as compared with the rate of change in the outputs obtained. Constant RS (CRS) occurs when the rate of change in the inputs equals that of the outputs. Alternatively, if the rates differ from each other, there is evidence of variable returns to scale (VRS). Another RS index used below is that associated with non-increasing returns to scale (NRS). Generally, it may be assumed that RS other than optimal indicate a lack of economies of scale, and thus probably also limited synergies stemming from country size.
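The scale effect can be isolated by solving the same envelopment program twice, once under CRS and once under VRS (the added convexity constraint), and taking the ratio of the two scores. The snippet below is a self-contained toy sketch, not the paper's computation; for the smallest unit the CRS score falls below the VRS score, and the ratio flags scale, rather than purely technical, inefficiency.

```python
import numpy as np
from scipy.optimize import linprog

# toy data: one input, one output; units A(1,1), B(2,4), C(4,5)
x = np.array([1.0, 2.0, 4.0])
y = np.array([1.0, 4.0, 5.0])
n = len(x)

def eff(j0, vrs):
    """Input-oriented efficiency: min theta s.t. x.lam <= theta*x[j0],
    y.lam >= y[j0], lam >= 0; with sum(lam) == 1 under VRS."""
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.array([np.r_[-x[j0], x], np.r_[0.0, -y]])
    b_ub = np.array([0.0, -y[j0]])
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1) if vrs else None
    b_eq = [1.0] if vrs else None
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

crs_a = eff(0, False)        # CRS score of unit A
vrs_a = eff(0, True)         # VRS score of unit A
scale_a = crs_a / vrs_a      # < 1 signals scale inefficiency
```

Here unit A is efficient on the VRS frontier (score 1) but not under CRS (score 0.5): its entire shortfall is attributable to operating at a non-optimal scale.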

The second characteristic, congestion, deals with the cost of disposing of unwanted inputs. The inefficiency arises from the fact that the presence of congestion requires the use of resources for the elimination of undesirable inputs that would otherwise have gone to generate more outputs. “Evidence of congestion is present when reductions in one or more inputs can be associated with increases in one or more outputs – or, proceeding in reverse, when increases in one or more inputs can be associated with decreases in one or more outputs – without worsening any other input or output.” [8]. Examples of input congestion appear in Coelli, et al. [7], among many others, in cases of government or union-based controls on the use of certain inputs. The literature employs the terms weak (WD) and strong (SD) disposability to denote whether evidence of congestion exists or not. A congestion score different from “1” may indicate that inadequate specialization patterns are used in pro-innovative activities.

SUMMARY

This report has largely been based on the European Innovation Scoreboard 2005 [12], [42]. One of the main thrusts of the EIS approach is that the indicators included to create an innovativeness index can be impacted by means of regulations and national innovation policies. The goal of subsequent EIS Methodology reports is “to further explore different dimensions of innovation and to identify areas that are not covered in the EIS” [4]. This trend may also be rooted in the recommendations of the Aho Report [1], which suggests “a 4-pronged strategy focusing on the creation of innovation friendly markets, on strengthening R&D resources, on increasing structural mobility as well as fostering a culture which celebrates innovation”. In such a manner, there will be somewhat of a departure from an emphasis on innovation inputs (such as R&D expenditures) and innovation outputs (such as patents), more in the direction of capturing aspects related to the creation of demand for innovation, and socio-cultural, educational, entrepreneurial, and flexible stimulators for the implementation of innovations. Even though the current “EIS provides considerably better coverage of innovation as a creative activity than innovation as a process of diffusing new technologies and knowledge” [4, p.3], it still exhibits several disadvantages.

Results derived from the EIS reports [3], [4] present only a partial picture of the innovativeness of countries. These data focus on a country’s ranking, and on whether countries are leading, exhibit average performance, are catching up, or are losing ground. None of these measures considers the specific economic and social conditions of a country. This paper has attempted to offer insight into whether available funds are spent efficiently by countries, rather than focusing solely on the aforementioned classifications and rankings. The use of non-parametric techniques can essentially reformulate well-established opinions and accepted levels of understanding of the problems of innovativeness.
Each composite index captures some information related to economic improvement. Since the items in these indexes are strongly correlated, it should be asked which item acts as a stimulus for the development of the others. Recommended innovation policies should not be considered as “an average” of responses from different sectors, by companies of different sizes, which operate within very different economic, political, and social contexts; this makes one of the objectives of the EIS approach – “that indicators have policy implications” – difficult to endorse. It is important that aspects of efficiency are addressed in a way that permits a better tailoring of EU policies to sectoral and regional specificity [46].
It is at times expected that composite indexes may serve as a guide for policy setting. However, the data series used in composite indexes change almost every year. Such a practice limits the possibility of identifying whether policy changes have contributed to the improvement of desired operational outcomes, and longitudinal-type studies are limited.

Yet another topic worth exploring deals with the elimination of the impact of contextual variables (market factors, culture, accumulated stock of experience, the macro-economic structure of the country, developed business links, etc.) upon the efficiency of innovation systems. If the impact of contextual elements upon the level of innovativeness is isolated, a composite index could serve as a starting point to examine the effectiveness of programs oriented towards the support of innovativeness (i.e., the extent to which policies related to innovativeness indeed contribute to social and economic objectives).
Acknowledgements
This research was partly supported by grants from the University of New Brunswick, and the Faculty of Business Administration at the University of New Brunswick. This support is noted and appreciated.