
Can Economics Mean Something to the Semantic Programme?

by Bert hamminga

Contents:
1. Patrick Suppes, statements and semantics
2. Sneed and Van Fraassen on empirical parts of a theory
3. Giere and Van Fraassen on two fixed point isms
4. Giere, Van Fraassen and Nowak on approximation
5. Economics: the last Emperor
6. The Semantics of Keynes
7. Van Fraassen, Giere, lines, light rays and consumption
8. Theory and reality: the data matrix 1.
9. Conclusion: the data matrix 2.
10. The sense of the semantic programme
Appendix 1: Empirical models
Appendix 2: Notation
Literature

1. Patrick Suppes, statements and semantics

Just as many people can make an abstract painting à la Mondriaan, but only after having seen some, so the Suppes representation of theories (Suppes 1957, 1960, 1962) looks trivial to many people, but only after they have seen some. Instead of representing a theory by "p → q" or "∀x(Px → Qx)" or "∀x(x ∈ P → x ∈ Q)", we put all interesting things (the P and Q information) to the right of "is an" in the expression "x is an S". An S! So a theory tells us what the set of S's is, by specifying the sets of objects involved, the restrictions on the functions between these sets, and finally the claim that x is one of them. I was born when Suppes started to mould his method of representation, and I was a student when I experienced the excitement in the early 70's when Jo Sneed (1971) finally succeeded in convincing the philosophy of science forum that the Suppes tool makes us see and realise things that we would not so easily see if we stuck to "p → q" or "∀x(Px → Qx)" or "∀x(x ∈ P → x ∈ Q)".

Sneed was consistently reluctant to come up with a new philosophy of science. He merely presented a highly useful tool. But he immediately found a strong ally in Wolfgang Stegmüller (1973), who, from his Carnapian viewpoint, sought by this approach to deepen his criticism of Popperian critical rationalism, and hence was eager, contrary to Sneed, to make a philosophy out of it. Stegmüller did not call it the semantic but the non-statement view, but he surely meant to harvest the same kind of ammunition against philosophical opponents as Van Fraassen, though the latter's opponents are different.

 

In the later stages of the spreading of the Suppes tool over the philosophy of science, a pretty wide consensus gradually arose: what you can frame in "x is an S" can also be framed in "p → q" or "∀x(Px → Qx)" or "∀x(x ∈ P → x ∈ Q)". [references here] It only often yields highly cumbersome formulas, and hence, until someone solves that quite unphilosophical problem, is often not recommended for scientific communication. Thus, it was felt, nothing philosophical can be inferred from the handsome and highly useful Suppes tool. For some this led to keeping it as a standard item in their rucksacks while wandering around in science, for others to returning home, lighting the fire and contemplating and debating the essential nature of science.

2. Sneed and Van Fraassen on empirical parts of a theory

In the discussion between the constructive empiricists and realists, at least Van Fraassen derives a philosophical message from the non-statement view, here called the semantic view. The semantic view is taken to imply that truth does not mark "the relation to the world, which science pursues in its theories" (1987:111), and it yields the concepts in terms of which he defines empirical adequacy:

The actual and observable parts ("empirical structures") of the theory should all be embeddable in some single model of the world allowed by the theory. (Van Fraassen 1987:112)

The philosophical message is that empirical adequacy does, and truth does not, mark "the relation to the world", etc.

Sneed also had the problem of distinguishing theoretical (in his terms potential) and empirical (partial potential) models. His empirical models are those parts of the theoretical models that do not contain theoretical terms. While Van Fraassen needs some criterion of observability (psychological?), Sneed made his life considerably easier by coining a definition of theoreticity: a term (or variable) of the theory is theoretical if its value in applications can only be calculated if you know (and use) the theory (Sneed 1971, 31-2). "Mass" is theoretical in Newtonian mechanics, because if you are not allowed to use Newtonian mechanics, you cannot calculate "masses" in an application, but you can measure time and distance independently of Newtonian mechanics! Hence, "mass" is a theoretical, ergo not an empirical, term with respect to Newtonian mechanics in Sneed's sense, and "time" and "distance" are empirical with respect to Newtonian mechanics. Quite easy and unambiguous, but as a result a term may be theoretical with respect to one theory and at the same time empirical with respect to another. If you are not scared of different and changing points of view in science, you have no problem. But on my journey through the literature on constructive realism/empiricism, I discovered that what marks the philosopher there is the search for a super-historic fixed point. He considers it to be his job to find such a fixed point, preferably a single one excluding others. Every philosophical position marks an ism in the debate concerning where to find this single point, and how to nail scientific theories to it. As far as fixed points are concerned, Van Fraassen is heading for the actual and observable (Van Fraassen 1987:112). This actuality and observability cannot be theory relative, as this would make his nail connecting theories to... yes, to what?... come loose, like Sneed's (who does not care).

3. Giere and Van Fraassen on two fixed point isms

Giere commented on Van Fraassen's position in "Constructive Realism" (Giere 1985), a highly perspicuous and brilliant article. "One cannot eliminate all questions about language or interpretation... Some additional semantic categories... are needed to distinguish masses from numbers - and thus mechanics from pure mathematics." (p. 77, italics his). He refers to the relation between, for example, the following two Suppes expressions:

x is a line

x is the path of a light ray

Giere is not willing to say that paths of light rays are lines, or that lines are sometimes, and sometimes not, paths of light rays (as Van Fraassen can be observed to do, for example: 1987:108). According to Giere, the claim of the relevant physical theory is that light rays can (or cannot) be so interpreted into mathematics. Later, indeed, Van Fraassen glides into formulations like "...the real world itself is (or is isomorphic to) one of these models" (Van Fraassen 1987, p. 111), where it is not completely clear whether Van Fraassen's "isomorphism" stands for approximation, interpretation, or both. Giere claims that additional semantical categories ("such as meaning or reference" (1985, p. 77)) are needed for the distinction between light rays and lines. We shall be able to shed more light on this issue after having analysed the economics example in section 5.

Giere also rightly does not envy Van Fraassen the task the latter set for himself of finding a fixed point method for picking out empirical substructures (Giere 1985, p. 81), but he nowhere deals with the problem of combining his own laudable abstinence in this respect with a fixed point ism like his constructive realism. Is it not the same as Van Fraassen's problem?

 

4. Giere, Van Fraassen and Nowak on approximation

Giere's third and highly instructive point (Giere 1985, p. 79-82; reply by Van Fraassen 1985, p. 289-91) concerns embeddability as defined by Van Fraassen. Consider also the Van Fraassen claim that

There is a single model of our planetary system allowed by Newtonian mechanics in which all observable parts ("empirical structures") of Newtonian mechanics are embeddable.

If it is decided that Newtonian mechanics can be endorsed (as many people between 1670 and well after 1911 did), this is never done because measurements of nontheoretical terms yielded a perfect fit with the Newton functions. My comment on this point of Giere's is that he is empirically right, and one can think of many reasons why people nevertheless endorsed the claim. The two main ones obviously coming to mind are: 1) they thought all deviations were caused by scientifically uninteresting measuring errors, 2) they thought there were some unknown minor forces, to be taken care of at a later stage. If some former believers in Newton are still alive, we could try to test our prejudices by asking them the question. But I doubt whether it is fruitful to proceed by searching for a rational foundation of our own possibly empirically erroneous ideas about it, as Giere is doing: he bravely abstains from any distinction between theoretical and empirical parts of models and writes about models as such that "we must avoid claims that any real system is exactly captured by some model" (1985, p. 79). "I propose we take theoretical hypotheses to have the following general form [these were my italics; to which court is he appealing? what is the court's status in science?]:

The designated real system is similar to the proposed model in specified respects and to specified degrees." (Giere 1985, 80, his italics)

He calls this a "more modest, constructive realism" (p. 80). Van Fraassen answers:

"To say that a proposition is approximately true is to say that some other proposition, related in a certain way to the first, is true" (Van Fraassen 1985, p. 289),

implying that Giere's attempt at modesty has failed.

Now this subject was clarified considerably, within the framework of the statement view, or what Van Fraassen would probably call the syntactic view, in the late seventies by the Polish philosopher of science Leszek Nowak (he has synthesised his work in Nowak 1985). His main idea is that all laws of science are ideal laws. Every scientist knows that they never hold in the real world. Scientists believe that their laws would hold in a world where a certain set of ideal conditions is met (no friction, perfect vacuum, perfectly competitive markets where everybody is perfectly informed, etc.). But they know that these ideal conditions are never met. NEVER! Hence all laws p → q are trivially false, and all idealisation statements:

ideal conditions → (p → q)

, due to the trivial falsity of the conditions left of the main implication sign, are trivially true. Van Fraassen holds that "truth does not mark the relation to the world, which science pursues in its theories", but there should be truth with respect to the empirical parts. Giere drops this idea of a perfect fit to observed data, and Nowak treats the falsity not as a lamentable matter of observational practice, but as a principle of science: science abstracts, willingly and knowingly. But we can clearly see here that Nowak's claim is independent of the semantic view, which is in line with the consensus of the seventies that framing in "p → q" or in "x is an S" is a matter of communicative convenience only. According to Nowak, the game of science is to remove as many ideal conditions as possible, by appropriately modifying, "concretising", the ideal law. The process yields, in principle, an endless increase in accuracy. There often is, for some law, a fair consensus among experts on the list of ideal conditions to remove, and on the priority in this list ("significance structure"). Since one law often has many different real world applications, concretisation runs like a river that ends in a delta (tree structure of concretisation). "Downstream" means more accuracy, that is, more Giere-similarity in "respects" and "degrees", a better Van Fraassen-approximation. The merit of Leszek Nowak is not only his simple and perspicuous empirical hypothesis about science, but also its corroboration in extensive empirical research by many able empirical philosophers of science. We know by now many beautiful and illustrative cases of concretisation in different fields of science.

Now we have harvested several related terms: similarity (Giere), approximation (Nowak, Van Fraassen), isomorphism (Van Fraassen). In honour of Giere's paper, let us take similarity as the term to proceed with. Consider:

The Einstein-similarity of a Newton model of our planetary system is.............

Such reports were actually given by Einstein himself and others in the early twenties. [references here] Though they did not bother to define a similarity concept in a logically rigorous way, they did aim at concise and perspicuous reports on how variable values and functions in a Newton model of our planetary system deviated from those in an Einstein model. The purpose was useful and clear: the precision of the two theories could thus be compared by measurements. More rigour in logical reconstruction is given to this kind of similarity concept in the philosophical literature on the reduction of theories [references here], a field of the philosophy of science which is still making considerable empirical progress. Now consider:

The Giere reality-similarity in respect R of a Newton model of our planetary system is of degree D.

Giere wants a theory independent fixed point specification of at least D. I doubt whether this is a standard adopted or recommendable for science in general, but, for instance, in doing standard statistical Neyman-Pearson tests (Neyman and Pearson 1967; for a critical analysis Keuzenkamp 1994), in some circles the allowed chance of accepting a false hypothesis is specified as 5%, irrespective of the nature of the hypothesis involved. So this kind of fixed point has some popularity.
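As an illustration of such a fixed point in daily statistical practice, here is a minimal sketch of my own (Python; the function name and the sample figures are invented, and this is only one of many possible test recipes) of a test at the conventional 5% level, using the large-sample two-sided critical value 1.96 regardless of what the hypothesis is about.

import math

def reject_at_5_percent(sample, mu0):
    """Large-sample z-test of H0: mean equals mu0, at the conventional 5% level.
    The critical value 1.96 is fixed in advance, whatever the hypothesis is about."""
    n = len(sample)
    mean = sum(sample) / n
    s2 = sum((x - mean) ** 2 for x in sample) / (n - 1)   # sample variance
    z = (mean - mu0) / math.sqrt(s2 / n)
    return abs(z) > 1.96

print(reject_at_5_percent([1.02, 0.98, 1.05, 0.97, 1.01, 1.03], mu0=1.0))   # False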

Now consider Van Fraassen's approximation as "some other proposition, related in a certain way to the first, is true":

There is a single model of our planetary system differing in way W from a single model allowed by Newtonian mechanics in which all observable parts ("empirical structures") of Newtonian mechanics are embeddable.

The question remains whence (if not from the most recent theory) Van Fraassen obtains his fixed point W. I do not directly see a place in science to test, illustrate or study how scientists deal with this W. The argument that elegantly proves Giere's attempt at modesty in realism to be in vain threatens to invoke reference to Perfect Knowledge, and to be vacuous in Its absence. A bad place for a nail.

Is Van Fraassen really so keen on finding a super-historic fixed point to hang science on his constructive empirical nail? In 1985, p. 289 he describes a deeper, far more sophisticated position:

"Both sides may nevertheless indulge a secret glee. I want to say "See, the theory of science can be developed without the realists’ metaphysical baggage!" Giere wants to say, "See, the theory of science can be developed without the constraint of the empiricists’ epistemological straightjacket.""

Van Fraassen is not so keen after all to find a fixed point, but more eager to apply Occam's Razor to (constructive) realism in the theory of science. Van Fraassen suspects that Giere is eager to do the same to (constructive) empiricism. This means that Van Fraassen will not rather die than let Giere have his way, provided he himself will have his. The combat can thus be expected to resolve itself after the mutual (not very bloody) cutting of some unnecessary limbs, but it might end in a surprising way now that we realise that, as a result of the "Perfect Knowledge" reference of Van Fraassen's W, what is behind the two "without"s in the quotation above should largely be interchanged.

The remaining question is: how will the "semantic view" programme in the theory of science be defined once Occam's Razor is effectively applied by both sides? What about interpretation and approximation in the semantic view? Are there good reasons to keep searching for fixed points to nail theories to?

Thus, I gathered some questions that I would like to deal with by appealing to a nice, classical, relatively simple and typical example taken from economics.

5. Economics: the last Emperor

As a philosopher of economics, I had to learn some physics, for the following reason: characteristically, if I report on findings concerning theory structure and theory development in economics, the philosophers' reaction is that this is not relevant material, because economics is not a "science", or at least a very "bad" one. To come up with something interesting, I should study "good" sciences like physics. So my strategy is this: when I find something in economics, I try to find the same in physics. On this I report first. Afterwards, there is some chance that my observations in economics are also swallowed by some philosophers. Let us now study one central part of a theory that tells the government how to ensure that physical scientists and even philosophers of science have jobs. The theory is usually called the multiplier theory of investment. Its author is John Maynard Keynes, the Last Emperor of economics, whom nobody even dared to offer a Nobel prize.

Consumption (C) is defined as all spending by private households. In the national account of a country, consumption is by definition a part of sales by companies. The rest of the sales by companies is not to private households, but to other companies. The sum total of sales by companies (net of company spending on goods, and depreciation) is often called the Net Investment (I) of the nation. C+I is total net sales, and it equals by definition total spending (if some firm sells, some other firm or a consumer spends). This aggregate is called the national income Y. It equals by definition the sum of all wages, rents on real estate, interest paid, and profits (both the part paid out to stockholders and the part kept in the companies). Now we are going to study a macroeconomic theory designed to calculate a theoretical term, the "investment multiplier", which measures the impact of an increase of investment on the national income (and hence on employment). The Suppes ritual will need something to the right of "is a", and I will say "is a Keynesian Multiplier System (KMS)" later. This is the theory:

C = m . Y + a

The theory has one relation, a linear relationship between Y and C. The idea is that if aggregate income goes up, we will consume more, and vice versa, and that this should be a fairly stable relation. If an additional billion ΔY of Y yields a ΔC of £800,000,000, then m is measured to be 0.8. This was Keynes's a priori estimate. Quite appropriately, he called this theoretical term the marginal propensity to consume. The theoretical term a (autonomous consumption) appeared only later in the literature. Keynes talks as much as possible about increments only in the context of KMS. But the theoretical term a is implicit. We come back to this in the concluding section 9. You can only calculate m and a if you use the theory, whereas annual consumption C and annual national income Y can be found in every office dealing with economic statistics. They are empirical with respect to KMS.

Now Van Fraassen and Giere would expect economists to evaluate the empirical claim of the theory in terms of empirical adequacy or similarity, respectively. This is not what is happening: economists already know for sure they have something pretty adequate: does not introspection tell you that it would be very strange to find a country where m would be negative (people there would on the average cut spending in reaction to an income increase), or far greater than one (on receiving an additional ΔY, people there would start to spend additionally much more than what they additionally received)? So, Keynes is more interested in the logical consequences of the theory. What effect would an additional investment increase ΔI have on income Y?

C = mY + a

Y - I = mY + a

Y = 1/(1-m) * (I + a)

ΔY/ΔI = 1/(1-m) =: k

Multiplier k is thus defined as a transformation of the theoretical term m. "It tells us that, when there is an increase of aggregate investment, income will increase by an amount which is k times the increment of investment" (Keynes 1936, 115). ... "If the consumption psychology of the community is such that they will consume, e.g., nine-tenths of an increment of income, then the multiplier k is 10; and the total employment caused by (e.g.) increased public works will be ten times the primary employment provided by the public works themselves, assuming no reduction of investment in other directions" (Keynes 1936, 117). ... "Mr. Kahn has examined the probable quantitative result of such factors ... But, clearly, it is not possible to carry any generalisation very far. One can only say, for example, that a typical modern community would probably tend to consume not much less than 80 per cent of any increment of real income, if it were a closed system with the consumption of the unemployed paid for by transfers from the consumption of other consumers, so that the multiplier after allowing for offsets would not be much less than 5" (Keynes 1936, 121). On p. 129, he arrives at 2.5 on the basis of figures by Kuznets, a result he calls quite plausible for a boom, but improbably low for a slump.
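To make the arithmetic concrete, here is a minimal Python sketch of my own (purely illustrative, not part of Keynes's text) reproducing these figures: a marginal propensity to consume of nine-tenths gives k = 10, four-fifths gives k = 5, and the Kuznets-based k = 2.5 corresponds to m = 0.6.

def multiplier(m):
    """Investment multiplier k = 1/(1-m) for a marginal propensity to consume m."""
    assert 0 < m < 1, "the theory assumes 0 < m < 1"
    return 1.0 / (1.0 - m)

# Keynes's own figures: nine-tenths gives 10, four-fifths gives 5,
# and the Kuznets-based estimate k = 2.5 corresponds to m = 0.6.
for m in (0.9, 0.8, 0.6):
    print(f"m = {m:.1f}  ->  k = {multiplier(m):.1f}")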

We can learn that attention does not concentrate on the validity of the theory but on the quantitative impulse of investment, e.g. by the state, on the national income and hence on the dangerously low employment England was facing at the time of the book’s appearance. The theory is argued for a priori. Who can possibly doubt a positive influence of income on consumption? The real world, Keynes has no doubts, simply must be near (in what sense near we shall discuss later) the class of possible models defined by what Giere calls the theoretical hypothesis

0<m<1

Only the value of m matters, but not even so much, because whatever its value, a government always effectively primes the pump of the economy by stimulating investment. The pump certainly works properly; only its quasi-mechanical efficiency is subject to some uncertainty.

How is this empirical knowledge achieved? By some good thinking, given some a priori available information. How is it certified? By the very same process! The proof of the pudding is not in the eating here, but in the making.

In economics, key theories are never rejected for realist or empiricist reasons. Some economists like to test theories: there is a literature on transitivity of preferences (if a consumer prefers A to B, and B to C, will he prefer A to C?). The anomalies found could not convince economists to drop the assumption. There is a literature on "money illusion" (if prices and income rise by the same percentage, so nothing changes in the consumption possibilities except the numbers representing the prices and the incomes, will people keep the composition of their shopping baskets unaltered?), but though the existence of money illusion could definitely be reported, economists rightly do not believe they should change their theories accordingly. There is a literature on capital-rich countries exporting labour-intensive products (Leontief paradox), but the relevant basic theory claiming this is impossible is still alive. The lip service paid by economists to falsificationism, realism, empiricism, and what have you is rightly referred to by Weintraub as methodological folklore (Weintraub 1992:[page]). It is an unnecessary hypocrisy, possibly resulting from the bad influence of the philosophy of science.

6. The Semantics of Keynes

There is quite some literature containing highly instructive semantic representations (also referred to as structuralist reconstructions) of economic theories. A Keynesian theory has been analysed by Maarten Jansen (1989). His reconstruction represents a modified version generally adopted for the exposition of Keynesian thought in later periods. My present attempt is to stay as close as I can to the original exposition by Keynes (1936). However, Keynes defines "investment" in at least two ways: he starts with an extremely problematic definition in terms of "user cost", a rather obscure term which, during the preparation of his book, able colleagues like Townshend strongly advised him not to employ, and which quickly disappeared from the discussions by the back door. The second way Keynes defines investment is in terms of the increase of the stock of capital in the country. I will stick to the second definition.

What do we need in Keynesian Multiplier Systems KMS? (For your convenience, all formulas are collected in appendix 1., the notation is explained in appendix 2)

First, this is what a domain D should look like: there should be two types of agents: a set of firms fi ∈ FI, and humans hu ∈ HU. There should be time ti ∈ TI measuring periods, years from January 1st to December 31st, for which we account flow variables. Should the domain really exist? No, that requirement comes later. For the time being, we think of all possible domains.

What should our range R look like? All variables are measured in one type of numbers: decimal numbers D. The economy, as you know because you are part of it, uses only decimal numbers; many economic theories need more, but not this one.

What observation relations are involved?

Firms fi pay rent. This is measured by a function

rent: FI × HU × TI → D

thus rent(fi,hu,ti) is the rent paid by fi to hu during ti.

Firms fi pay wages. This is measured by a function

wage: FI × HU × TI → D

thus wage(fi,hu,ti) is the wage paid by fi to hu during ti.

Firms fi pay interest. This is measured by a function

interest: FI × HU × TI → D

thus interest(fi,hu,ti) is the interest paid by fi to hu during ti.

 

It must be observable how much is sold by firm fi to human hu for consumption; this is measured by a function

cons: FI × HU × TI → D

thus cons(fi,hu,ti) is the consumption during year ti by human hu as far as bought from firm fi.

Firms also buy from each other (intra-industry sales), so this must be observable via a function:

intrasales: FI × FI × TI → D

The first FI refers to the set of firms as buyers, the second FI to the same set of firms as sellers. Thus, intrasales(fi',fi,ti) = 3 means: during year ti, firm fi' bought 3 wage units' worth of products from firm fi.

Finally, all these variables are supposed to be measurable for some period, a year. The variables would obviously have around one-twelfth of their value if we measured in months. During this year, machines, houses and other existing fixed capital lose value as a result of depreciation. In order to avoid "items of fixed capital" in our reconstruction, consumers immediately write off all they buy, and only firms have depreciating fixed capital goods. There is an annual depreciation figure (a decimal number) for every firm fi, so an observation function

depreciation: FI × TI → D

thus depreciation(fi,ti) = 3 means that during year ti, the fixed capital items of firm fi lost 3 wage units of value by depreciation.

In every firm's bookkeeping, the costs of the items in stock that are or are not being sold are calculated by bringing together the costs of labour (wages), real estate used (rent), money borrowed (interest), and goods bought from all firms fi' and used for making every quantity of the item that was sold for cons(fi,hu,ti) and for intrasales(fi',fi,ti). So the bookkeepers of every firm fi have at their disposal a function doing this:

cost: cons(fi × HU × TI) → D

cost: intrasales(FI × fi × TI) → D

thus, cost(cons(fi,hu,ti)) is the cost of sales during year ti by firm fi to human hu, and cost(intrasales(fi',fi,ti)) is the cost of sales during year ti by firm fi to firm fi'. Because all firms have such a function, we may merge the domain sets cons(fi × HU × TI) for all fi into one domain set cons(FI × HU × TI), and the same leads us from intrasales(FI × fi × TI) to intrasales(FI × FI × TI). Thus the domains of our cost function can be extended to:

cost: cons(FI × HU × TI) → D

cost: intrasales(FI × FI × TI) → D

 

This ends the list of what some thing x should be equipped with in order to be a partial potential, or empirical, model of the theory, that is, in order to be a CKMS (Candidate Keynesian Multiplier System). A CKMS should be a kind of thing of which one can meaningfully ask: is Keynes right here? (So that is why we now should be happy to note that, e.g., our planetary system is not a CKMS.) Did we succeed in focusing on the right kind of things? We can think of some additional requirements for the candidate: the functions should all be defined for their complete domains (to every element in the set left of the arrow should correspond an element in the set right of the arrow). Also, we obviously could not make much of, e.g., negative sales. In general, not every instance of every variable and every function allowed thus far seems to have an understandable economic interpretation at first glance, and this is what is required for our candidates. But since these problems are usually considered to be trivialities and thus are not dealt with explicitly in the expositions of the theory, let us do the same (even though many great insights in economics originated from the discovery that what was thought to be trivial turned out to be highly problematic).

So CKMS is characterised by a domain D and a set OBS of observation relations, which are all functions.
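To make this characterisation concrete, here is a minimal sketch of my own (Python; the encoding of the observation functions as dictionaries and the function name is_ckms_candidate are hypothetical, not part of any official formalism) that mechanically checks the requirements just mentioned: every observation function is defined on its complete domain and takes no negative values.

from itertools import product

def is_ckms_candidate(FI, HU, TI, obs_funcs):
    """Check the CKMS requirements sketched above: every observation function
    is defined on its complete domain and takes no negative values."""
    # Expected domain of each observation function (cost is left out for brevity).
    domains = {
        "rent": product(FI, HU, TI), "wage": product(FI, HU, TI),
        "interest": product(FI, HU, TI), "cons": product(FI, HU, TI),
        "intrasales": product(FI, FI, TI), "depreciation": product(FI, TI),
    }
    for name, domain in domains.items():
        f = obs_funcs[name]                 # here: a dict mapping domain tuples to decimals
        for point in domain:
            if point not in f or f[point] < 0:
                return False
    return True

# Tiny invented example: one firm, one human, one year.
OBS_example = {name: {} for name in ("rent", "wage", "interest", "cons", "intrasales", "depreciation")}
for name in ("rent", "wage", "interest", "cons"):
    OBS_example[name][("fi1", "hu1", 1)] = 1.0
OBS_example["intrasales"][("fi1", "fi1", 1)] = 0.0
OBS_example["depreciation"][("fi1", 1)] = 0.5
print(is_ckms_candidate(["fi1"], ["hu1"], [1], OBS_example))   # True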

On CKMS's, other concepts are defined, for analytical convenience, but also because these concepts are known and used by economists and economic actors (hence, these concepts are used in many more contexts than only the Keynesian multiplier theory of investment). "Profit", profit(fi,ti), is such a defined concept, and it should have a highly specified relation to the rest of the variables: it is, by definition, the closing entry balancing the budgets of all firms, and of the country: what is left over of sales revenue for the humans hu ∈ HU who own the firms fi ∈ FI after they paid the firms' wages, rents, interests and materials bought from other firms. To understand the definition of profit(fi,ti), the profit of a firm in CKMS's, we must first define investment(fi,ti), the investment of a firm during year ti, as:

investment(fi,ti) =

wage(fi,ti) + rent(fi,ti) + interest(fi,ti) + Σ intrasales(fi,FI,ti) minus

cost(Σ cons(fi,HU,ti)) + cost(Σ intrasales(FI,fi,ti)) + depreciation(fi,ti)

Yes, this definition requires elaborate explanation. The wages, rents and interests paid are investments in the firm. The products the firm creates are produced by the services of labour, real estate and capital lent to firm fi. One of the bookkeepers' tasks is to appropriately attribute this money as costs of the items produced. This money is thus considered by the firm, its owners, the tax authority, and others involved, to be invested in the items produced. The same holds for Σ intrasales(fi,FI,ti), the sum total of the value of all purchases during year ti by our firm fi from all firms fi' ∈ FI. E.g. steel bought for making cars, whether still lying around as plates or already reshaped into the form of a finished car, is an investment of firm fi. Selling of the car by our firm fi to a consumer constitutes a disinvestment. Hence, this should be subtracted ("minus"). This means, adding all, that cost(Σ cons(fi,HU,ti)), that is, the total cost of what is sold during the year ti to hu ∈ HU, leaves the firm as a disinvestment. The same holds if not a consumer, but another firm buys a car, so, analogously, cost(Σ intrasales(FI,fi,ti)) is a disinvestment. Meanwhile, firm fi's machines and buildings deteriorate, leading to an estimated loss of value measured by the function depreciation(fi,ti), also a disinvestment.

Now that we understand investment, we can finally define profit, for every firm fi, as what is left for the humans hu ∈ HU who own firm fi after the firm has paid its wages, rents and interests, and the other firms' bills due for the relevant year.

profit(fi,ti) =

Σ cons(fi,HU,ti) + Σ intrasales(FI,fi,ti) + investment(fi,ti) minus

wage(fi,ti) + rent(fi,ti) + interest(fi,ti) + Σ intrasales(fi,FI,ti)

Firm fi received in total, during the year ti, Σ cons(fi,HU,ti) from all consumers hu ∈ HU, and Σ intrasales(FI,fi,ti) in total from all other firms fi' ∈ FI. It also gained value, not in the form of money, but in the form of valued stocks, machines, buildings, etc., if investment(fi,ti) is accounted to be positive. Thus, under "minus" all of firm fi's bills due for the relevant year follow. Σ intrasales(FI,fi,ti) is the summation of all sales of our firm fi to all other firms fi', while Σ intrasales(fi,FI,ti) is the summation of all purchases by our firm fi from other firms fi'.

I stress again that, unlike in physical theories, these "second order" defined variables profit and investment are not purely technical terms for the economists, but are claimed to have an interpretation as the concepts "investment" and "profit" as used by the actors in x. The definitions could, in principle, be "wrong". They could, e.g., be violated in a country where capitalism is in an early phase and not yet completely understood by its actors. This I personally experienced in Moscow. Russia is not (yet) a CKMS.

Keynes is right in every CKMS that is indeed a KMS. In all other CKMS's he is wrong, and he expects the latter not to exist in reality.

Now, x is a KMS iff x is a CKMS and for all years ti,

C(ti) = m Y(ti) + a , 0<m<1

so we need to define C and Y, our nontheoretical terms, in such a way that all CKMS's have a unique C and Y for every year. Then, investment I follows naturally as Y - C and we have all we need to get to work with our theoretical variables. Hence, Keynes needs two more "second order" defined variables, the national consumption and the national income of nation x, which are liable to the same requirement of appropriateness to the speech of the actors in x:

C(ti) = Σ cons(FI,HU,ti)

and

Y(ti) = Σ income(FI,ti)

Keynes proved rigorously that the part of Y which is not consumed must, by definition (in our language: in every x ∈ CKMS), be invested:

" ti (Y(ti) = C(ti) + I(ti))

 

What, finally, is the scientific achievement of Keynes here? First,

He clarified considerably the concepts of income, consumption and investment.

Second, and most important,

He proved an interesting and highly surprising theorem (the multiplier effect), from only one single, a priori extremely plausible assumption (the consumption function).

 

In sum: there are two important types of empirical scientific progress, achieved without any testing: one is clarification of concepts and the other is proving interesting empirical theorems. I hope already to have shown that the semantic view not only can help to recommend these issues as research objects for the philosophy of science but can also help to argue that they should have a central place on the agenda, but I will further clarify this by solving some of the problems in the realism-empiricism discussion with reference to the example above.

7. Van Fraassen, Giere, lines, light rays and consumption

The consumption function C = mY + a is a straight line. Straight lines can be said to be, or not to be, (possible) light ray paths, depending on the empirical situation (van Fraassen 1987:108, 1991:6). Hence, indeed: depending on the empirical situation, the path of a light ray might be, or not be, a consumption function. We need, however, carefully to specify the sense in which this is true, and thus, running into Giere and Van Fraassen's discussion of interpretation and approximation, we come to talk about "is" and "is isomorphic to".

National accounting bureaux give us lists of observation triplets of type:

Year  C    Y
1     C1   Y1
2     C2   Y2
3     C3   Y3
...   ...  ...

Table 1

In what domain are they observed? Keynes, in the quotations in section 5, uses the terms "community" and "system". It is a real, existing domain D = {FI, HU, TI} satisfying our requirements in section 6, summarised in point 1. of the Appendix. How are they observed? By applying the observation functions in OBS = {rent, wage, interest, cons, intrasales, depreciation, cost}. Hence there must be a set of ranges R, containing one range only: R = {D}, the decimals.

Let us compare this with a light ray measurement like in Figure 1:

Figure 1: Little Light Lab (courtesy Säkerheits Tändstickor ®)

The sheets have little, but not too little, holes, and it is tested whether from the arrow point the light source is visible through the holes. Once this is established, the co-ordinates of the holes are measured by choosing an origin 0 and axis distance measurements Y (horizontal) and C (vertical).

This yields observation triplets:

Sheet  C    Y
1      C1   Y1
2      C2   Y2
3      C3   Y3
...    ...  ...

Table 2

 

Both Keynes (who in fact does not do so) and rectilinear light propagandists may claim that:

∃(m,a) ∈ R×R ∀(C,Y) [C = mY + a]

This means that with the first two sheets (years) they can calculate the theoretical terms with respect to their theories, m and a. Then nervousness starts: would the third, and the fourth, etc., measurement indeed exactly fit the theoretical calculation of (m,a) done with the help of the first two measurements? Never, of course. Both have a list of factors that, they feel sure in advance, cause deviations. Trembling hands and interrupting phone calls have a low status on this list; such things as diffraction and inflation have a high status (a reconstruction of these lists yields Nowak's significance structure). Let us sharply contrast real science with its ideal representations by depicting below some measuring results and the standard reaction to them by leaders in the field who are chosen as referees of the journals to which the results are submitted:

Figure 2: The referee's (not the philosopher's??) judgement of science

 

 

How does the referee of the journal see the relation of theory to reality? Where are truth and realism?

 

8. Theory and reality: the data matrix 1.

Let us be more precise (and realise that, mutatis mutandis, for (Candidate) Keynesian Multiplier Systems one can without further ado substitute (Candidate) Rectilinear Light Systems).

A matrix is a possible data matrix of CKMS iff it is a 4-dimensional matrix of decimals. The number of decimals contained in the matrix, denoted by number(pdm), should equal:

[number(FI ∪ HU) * [number(FI) + 3*number(HU)] + [number(FI ∪ HU) + 1] * number(FI)] * number(TI)

Let me illustrate this with a numerical example of an x with only 2 firms fi1, fi2 and only 2 humans hu1 and hu2: one slice of the resulting 4-dimensional possible data matrix is a transaction page yielding sales, purchases, and production factor bills (rent bill, wage bill and interest bill):

ti, transactions | fi1                 | fi2                 | hu1           | hu2
fi1              | 0                   | intrasales(fi2,fi1) | cons(fi1,hu1) | cons(fi1,hu2)
fi2              | intrasales(fi1,fi2) | 0                   | cons(fi2,hu1) | cons(fi2,hu2)
hu1 (rent)       | rent(fi1,hu1)       | rent(fi2,hu1)       | 0             | 0
hu1 (wage)       | wage(fi1,hu1)       | wage(fi2,hu1)       | 0             | 0
hu1 (interest)   | interest(fi1,hu1)   | interest(fi2,hu1)   | 0             | 0
hu2 (rent)       | rent(fi1,hu2)       | rent(fi2,hu2)       | 0             | 0
hu2 (wage)       | wage(fi1,hu2)       | wage(fi2,hu2)       | 0             | 0
hu2 (interest)   | interest(fi1,hu2)   | interest(fi2,hu2)   | 0             | 0

Table 3: Transaction page

We have number(FI ∪ HU) columns, the first dimension; column fi1 lists the value of purchases by fi1 from the other economic agents, etc. So, row headers contain the sellers, column headers contain the buyers. Thus wage(fi1,hu1) is the value of labour bought from seller hu1 by buyer fi1. We have [number(FI) + 3*number(HU)] rows, the second dimension. If we turn the page, we see another matrix, the aggregate bookkeeper's cost page:

 

ti, cost | fi1                       | fi2                       | hu1                 | hu2                 | depreciation
fi1      | 0                         | cost(intrasales(fi2,fi1)) | cost(cons(fi1,hu1)) | cost(cons(fi1,hu2)) | dp(fi1)
fi2      | cost(intrasales(fi1,fi2)) | 0                         | cost(cons(fi2,hu1)) | cost(cons(fi2,hu2)) | dp(fi2)

Table 4: Cost page

 

which has [number(FI ∪ HU) + 1] columns, listing the cost of the purchases by the agents mentioned in the column header from the seller read off in the row header. Sellers can horizontally add their loss of product value as a result of their sales if they add depreciation(fi), which is, for every firm fi, accounted in the [number(FI ∪ HU) + 1]st column. The number of rows equals number(FI).

We have such pairs of transaction and cost pages for all years ti ∈ TI, so we have number(TI) such pairs of pages.

In sum, the number of decimals in a possible data matrix for year ti equals

[number(FI ∪ HU) * [number(FI) + 3*number(HU)] + [number(FI ∪ HU) + 1] * number(FI)]

and a possible data matrix over number(TI) years has number(TI) times this amount of decimals. One row triple (Year, consumption, income) in Table 1 is a compilation (ti, Σ cons(FI,HU,ti), Σ income(FI,ti)) for some year ti ∈ TI, and results from what is most appropriately called a ti-slice of such a possible data matrix (you can cut fi and hu slices too, of course).
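For the two-firm, two-human example, the count can be verified mechanically; a small Python sketch of my own (the function name number_pdm is not the author's):

def number_pdm(n_fi, n_hu, n_ti):
    """Number of decimals in a possible data matrix, per the formula of section 8:
    one transaction page and one cost page per year."""
    transaction_page = (n_fi + n_hu) * (n_fi + 3 * n_hu)   # number(FI ∪ HU) columns, number(FI)+3*number(HU) rows
    cost_page = ((n_fi + n_hu) + 1) * n_fi                 # extra depreciation column, number(FI) rows
    return (transaction_page + cost_page) * n_ti

# Two firms, two humans: 4*8 + 5*2 = 42 decimals per year.
print(number_pdm(2, 2, 1))    # 42
print(number_pdm(2, 2, 10))   # 420 over ten years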

 

Now what makes a possible data matrix a real data matrix, correctly resulting from measurement of a real domain D? To arrive at a simple statement of these conditions, let us lump together all functions in OBS into one single function:

obs: 𝒟 → D^number(pdm)

where 𝒟 is the set of all possible domains D, and "D^number(pdm)" refers to the Cartesian product of number(pdm) copies of the set of decimal numbers (ordered in the 4-dimensional form of a possible data matrix). The function obs tells us how every suitable (existing or not existing) domain D yields just one single unique possible data matrix containing number(pdm) decimals. We call it a possible data matrix as long as we are not ready to claim that this D really exists.

 

We are still not engaged in considering such a claim of existence of D when we define the subset KMS of CKMS:

x is a KMS iff

1. x is a CKMS

2. there is (m,a) such that ∀ti [C(ti) = mY(ti) + a]

3. 0<m<1

Marginal propensity to consume m and autonomous consumption a are theoretical terms (calculable only if you have at least two observed pairs (C(ti),Y(ti)) and (C(ti'),Y(ti')) and only if you assume the equation in 2. above). KMS does not constitute any empirical claims, but it can be used to make them. The multiplier theorem is merely a formal corollary from the definition of KMS, and thus not an empirical claim either:

Multiplier Theorem: if x is a KMS then ∀ti [ΔY(ti)/ΔI(ti) = k], where k = 1/[1-m]
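As just noted, the theoretical terms become calculable once two observed pairs (C(ti),Y(ti)) are available and the KMS equation is assumed. A minimal Python sketch of my own, with invented figures, of that calculation and of the multiplier it yields:

def theoretical_terms(c1, y1, c2, y2):
    """Solve C = m*Y + a from two observed (C, Y) pairs; assumes the KMS equation holds."""
    m = (c2 - c1) / (y2 - y1)     # marginal propensity to consume
    a = c1 - m * y1               # autonomous consumption
    return m, a

# Invented national-account figures for two years (in wage units).
m, a = theoretical_terms(c1=80.0, y1=100.0, c2=88.0, y2=110.0)
k = 1.0 / (1.0 - m)               # multiplier theorem: delta-Y / delta-I = k
print(m, a, k)                    # approximately 0.8, 0.0, 5.0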

 

9. Conclusion: the data matrix 2.

A possible datamatrix pdm is a real (actual) datamatrix of D iff

1. D denotes a real, actual, existing domain.

2. pdm = obs (D)

Unfortunately, scientists can have only shaky beliefs about the satisfaction of both conditions. Moreover, to get into a meaningful discussion with a real scientist, it is necessary to realise that almost all his problems, hopes and frustrations are hidden behind these two still rather empty phrases.

One example of an empirical claim that could be made with the help of CKMS and KMS is the following:

" D, if D is real and $ pdm[pdm = obs (D)] then (D,obs,R,pdm) is not only a CKMS (what it is by definition) but even a KMS (what it might turn out not to be).

The claim actually made by Keynes, however, is considerably weaker. It can be summarised as follows:

If employment is linearly dependent on income Y (which is reasonable), any additional ΔI(ti) (ti = now,...,eternity), measured in man years, will raise employment by kΔI(ti). This multiplier k is in any case far greater than one: at least 3, maybe even 5. Hence every man put to work by the government will put between 2 and 4 other men to work, and not, as everybody thinks, only himself.

Keynes does not believe that there is an (m,a) such that if the domain D is real and pdm = obs(D) then ∀ti ∈ TI [C(ti) = mY(ti) + a]! In Keynes 1936, Keynes sticks consistently to analysing his multiplier k = 1/[1-m] in terms of increments only. So, there is no need for a theoretical term a. For his multiplier effect, a could be anything! For Keynes, plots like Figure 2 are of no relevance to his theory. His method of analysis became most sharply explicit in a debate with Tinbergen (Keynes 1939, Tinbergen 1940, Keynes 1940). Tinbergen started to make plots like Figure 2 for the League of Nations. Keynes sharply criticised "Professor Tinbergen's method" (later hailed as the founding of econometrics). Keynes wanted to analyse in terms of ΔC/ΔY = m only, and not in terms of what he calls a "quantitative formula" C = mY + a, as he explains in the following remarkable passage:

"to convert a model into a quantitative formula is to destroy its usefulness as in instrument of thought. For as soon as this is done, the model looses its generality and its value as a mode of thought" (Keynes 1939:[page]).

Now, he is forced by Tinbergen's plots to recognise the term a as at least an object of thought. He takes the position that economic relations are inherently unstable: m and a move through time, just as C and Y do. It is only because m and a move more slowly than C and Y that it is meaningful to ponder, for the short run, the consequences of an investment impulse. It makes no sense whatsoever to draw plots as done in Figure 2, because through the number(TI) years that yield the number(TI) points in the plot, both m and a may well have moved up, down, or both, significantly. Keynes compares economics to logic. He claims it cannot be a physical science of society.

What are the hopes of the brave workers in our little light lab of Figure 1, who are generating tables of triples like Table 2? Is Keynes right in thinking that their aims are principally different from his, as fundamentally as physics differs from logic? Do they claim

Because our D is real and because we produce pdm's such that [pdm = obs(D)], our x's such that x = (D,obs,R,pdm) are not only Candidate Rectilinear Light Systems, but even genuine ones, that is, ones in which, for all pdm's we produce,

∃(m,a) ∀sheets [C(sheet) = mY(sheet) + a] ?

We do not know whether this really is claimed, because these light lab workers exist only in my fantasy. I made up the fantasy in order to show how van Fraassen's semantic programme suggests that progress could be made by means of empirical research in this direction, as I have done with my field work on KMS's. What seems trivial may empirically turn out to be completely different from what our philosophical self-indoctrination makes us expect. Figure 2, I suspect, should lead the way.

Figure 2 is a visualisation of six (derived) datamatrices. One cannot say of such a visualisation that it "is" a line, let alone a straight line (this holds even for the picture "Fraud"). Instead, the pictures show finite sets of points defined in a geometrical space. Neither can one say that such a plot "is" not, or "is", in a standard sense, "isomorphic" to a straight line. The question of interpretation, Giere (1985:77), approximation, van Fraassen (1985:289-90), whether something "is", or "is isomorphic to", something else, van Fraassen (1987:111), concerns other relations between other reconstructive items involved. What items? What relations?

 

The first question is:

1. Is the possible datamatrix pdm measured precisely enough?

But functions Δ measuring proximity:

Δ([obs(D) - pdm])

make no sense, because obs is defined as our (exclusive) way to measure the values in pdm, so Δ's arguments [obs(D) - pdm] are zero by definition. In practice, the matter is usually settled by statistically analysing the coherence of a set of related real datamatrices pdm or obs(D), two names for the same things, resulting from more or less precisely replicating the experiment. In statistical analysis some real datamatrices turn out to be improbable given the other datamatrices. This may induce lab workers to decide that some real observations have been "not so real after all", for various reasons, ranging from low rank "trembling hands" reasons to high rank "diffraction" reasons. Popper and Lakatos, in their crusade against ad hoc and conventionalism, echoed the lab leader's adage that one should try to explain errors in a way enabling their avoidance in the future. The lab is seldom ordered, though, to concentrate on every error made by anyone in any case. To allow a tolerable degree of "ad hoc" and "it was probably an uninteresting mistake, let's try again" is not a matter of degeneration, as Lakatos held, but a matter of subtle leadership. That requirement of practical subtlety means that members of our intellectual trade, philosophy of science, are seldom appointed as lab directors, nor even as referees of the journals in which the real, as opposed to the philosophical, evaluation of science is done.

The second question is:

2. How to judge whether paths of light rays and consumption functions are straight lines?

The formal answer has already been given, and is very simple. The following must be true:

" D, if D is real and if $ pdm[pdm = obs (D)] then

$ (m,a)" ti[C(ti) = mY(ti) + a]

Look again at Figure 2. Every ti-slice of the datamatrix pdm yields a point. Up to two points, if we assume, for both pdm's, [pdm = obs(D)], which nobody can prevent anybody from doing, the claim is mathematically true: through one or two points there always is a line. However, if the third point is not on the straight line going through the first two, all problems start at once. First:

2.1. The scientist can now formally prove that, if his function is a straight line, then he made errors in his observations

He already firmly believed that he always makes some errors, but until this moment he could not prove it.

Second

2.2. If he made errors in his observations, then all observations could differ from their "true" values obs(D) (where obs(D) is meant to refer to the "true" values in D that everybody, philosophers and lab workers alike, talks about, but that nobody has ever seen (quite an unempirical attitude!)).

Leszek Nowak, 1985, consistently avoids this unempirical attitude by recognising the triviality that every scientist knows in advance that the upper left graph of Figure 2 will never appear in real science. He takes the Marxian conviction, quite appropriate for the purpose, that figures like Figure 2 depict "surface phenomena". "Deep" theoretical explanations ignore accidental factors and hence offer approximations only; the truth of deep theories lies in their (Hegelian) "essence".

This sorry absence of the "true" obs(D) leads scientists who think in terms of straight lines to estimate "their" line by means of some statistical procedure like ordinary least squares. This yields a pair of estimates (m̂,â). Now, we finally have the question:

3. Is the straight line [C = m̂Y + â] a good approximation of [C = mY + a]?

Here in question 3., (m̂,â) unproblematically refers to what a statistician has done with a datamatrix supplied by an observer. But (m,a) refers to something highly obscure: to the straight line that would have resulted if the actual observation results had equalled the "true" values obs(D), and if obs(D) would yield a straight line. This can be decided, yes, even measured, if it concerns an old theory and there is a new theory in terms of which old observation errors can be formalized. If it concerns the latest version of the latest theory, it can only be decided (for better and worse, and quite often for worse) by those who have intuitions about the theories of the near future: lab leaders and journal referees. My doubts as to whether they can benefit from our assistance stem from my belief that we cannot be expected to have such intuitions. If we had them, we should stop our careers as philosophers of science immediately and become genuine first order scientists.
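For completeness, here is a sketch of the two procedures contrasted in this section: ordinary least squares yielding the estimates (m̂,â), and Keynes's increments-only reading of ΔC/ΔY year by year. It is a Python illustration of my own; the annual figures are invented and stand for nothing real.

# Invented annual national-account data (wage units): Y rises, C follows roughly linearly.
Y = [100.0, 104.0, 109.0, 115.0, 118.0, 124.0]
C = [ 81.0,  84.0,  87.5,  92.0,  94.5,  99.0]

# Ordinary least squares estimates (m_hat, a_hat) of C = m*Y + a.
n = len(Y)
y_bar, c_bar = sum(Y) / n, sum(C) / n
m_hat = sum((y - y_bar) * (c - c_bar) for y, c in zip(Y, C)) / sum((y - y_bar) ** 2 for y in Y)
a_hat = c_bar - m_hat * y_bar

# Keynes's increments-only reading: delta-C / delta-Y between successive years.
m_increments = [(C[i + 1] - C[i]) / (Y[i + 1] - Y[i]) for i in range(n - 1)]

print(round(m_hat, 3), round(a_hat, 3))
print([round(m, 2) for m in m_increments])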

Humanity continuously changes its decisions on what is real. First order scientists are, for better and for worse, the authoritative advisers in this. There is nothing more to say about reality and truth. Unless you met God, in Spinoza's sense of the Complete Physical Body and the Complete Body of Ideas corresponding to It (Spinoza 1677). And first order scientists have proven, in history, however far from The Absolute and Complete Truth they turned out to be, always to be nearer to It than anyone else at the moment, though many always wasted, and still waste, energy in the vain attempt to outdo empirical science.

10. The sense of the semantic programme

I hope this paper solidly illustrates what can be the fruits of van Fraassen’s semantic programme:

First of all, its structural reconstructions can (unfortunately they do not always, but one should never give up) provide highly perspicuous and simple presentations of theories.

Second, it yields most effective formulations of methodological strategies in science: "we want an x, such that...", "we want a restriction in model M such that...".

Third, by detecting structural similarities between theory structure and theory development strategies of groups in science unaware of each other, it can promote contacts and mutual learning.

Hence, I conclude by expressing my hope and expectation that empirical progress in our knowledge of science, made in the framework of van Fraassen's semantic programme, is not only useful to philosophical "science watchers" and philosophical interpreters of the sciences into the different available isms, but potentially also to the efficiency of scientific teaching and scientific communication itself.

 

 

Appendix 1: Empirical models CKMS (Candidate Keynesian Multiplier Systems), and Theoretical models KMS (Keynesian Multiplier Systems).

1. Domain D = {FI, HU, TI}

fi ∈ FI (firms)

hu ∈ HU (humans)

ti ∈ TI (time period, year)

2. Primary range R = { D }

D (decimal numbers, all values in wage units per year)

3. Set of observation relations OBS = {rent, wage, interest, cons, intrasales, depreciation, cost}

rent: FI × HU × TI → D [rent(fi,hu,ti): "rent paid by fi to hu during ti"]

wage: FI × HU × TI → D [wage(fi,hu,ti): "wage paid by fi to hu during ti"]

interest: FI × HU × TI → D [interest(fi,hu,ti): "interest paid by fi to hu during ti"]

cons: FI × HU × TI → D [cons(fi,hu,ti): "consumption by hu bought from fi during ti"]

intrasales: FI × FI × TI → D [intrasales(fi',fi,ti): "value of purchases by fi' from fi during ti"]

depreciation: FI × TI → D [depreciation(fi,ti): "depreciation of fi's fixed capital during ti"]

cost: cons(FI × HU × TI) → D [cost(cons(fi,hu,ti)): "cost of consumption bought by hu from fi during ti"]

cost: intrasales(FI × FI × TI) → D [cost(intrasales(fi',fi,ti)): "cost of purchases by fi' from fi during ti"]

 

4. A matrix is a possible data matrix of CKMS iff it is a 4-dimensional matrix of decimals. The number of decimals, denoted by number(pdm), should equal:

[number(FI ∪ HU) * [number(FI) + 3*number(HU)] + [number(FI ∪ HU) + 1] * number(FI)] * number(TI)

A ti-slice of a possible data matrix is the submatrix containing only the numbers referring to year ti.

5. Theorem: all functions in OBS can be lumped together into one single function:

obs: 𝒟 → D^number(pdm)

where 𝒟 is the set of all possible domains D, and D^number(pdm) refers to the Cartesian product of number(pdm) copies of the set of decimals (ordered in the 4-dimensional form of a possible data matrix).

6. Empirical models: x is a CKMS iff

6.1. x is (D,obs,R,pdm) as in 1. to 5.

6.2. pdm = obs (D)

6.3. N.B. D is not required to be real or existing!

7. Set of defined variables DEF = {investment(fi,ti), profit(fi,ti), income(fi,ti), C(ti), Y(ti), I(ti)}

For all fi ∈ FI and ti ∈ TI:

investment(fi,ti) := wage(fi,ti) + rent(fi,ti) + interest(fi,ti) + Σ intrasales(fi,FI,ti)

minus

cost(Σ cons(fi,HU,ti)) + cost(Σ intrasales(FI,fi,ti)) + depreciation(fi,ti)

profit(fi,ti) := Σ cons(fi,HU,ti) + Σ intrasales(FI,fi,ti) + investment(fi,ti)

minus

wage(fi,ti) + rent(fi,ti) + interest(fi,ti) + Σ intrasales(fi,FI,ti)

income(fi,ti) := Σ wage(fi,HU,ti) + Σ interest(fi,HU,ti) + Σ rent(fi,HU,ti) + profit(fi,ti)

For all ti ∈ TI:

C(ti) := Σ cons(FI,HU,ti)

Y(ti) := Σ income(FI,ti)

I(ti) := Σ investment(FI,ti)

8. Theorem of the National Account: if x is a CKMS then ∀ti [Y(ti) = C(ti) + I(ti)]

9. x is a KMS iff

9.1. x is a CKMS, that is x is a (D,obs,R,pdm) as specified in 1. to 5.

9.2. for pdm there is (m,a) such that ∀ti [C(ti) = mY(ti) + a]

9.3. 0<m<1

9.4. N.B. D is still not required to be real or existing!

Marginal propensity to consume m and autonomous consumption a are theoretical terms (calculable only if you have at least two observed pairs (C(ti),Y(ti)) and (C(ti'),Y(ti')) and only if you assume the equation in 9.2).

10. Multiplier Theorem: if x is a KMS then ∀ti [ΔY(ti)/ΔI(ti) = k], where k = 1/[1-m]

11. A possible datamatrix pdm is a (real, actual) datamatrix iff

11.1. D denotes a real, actual, existing domain.

11.2. pdm = obs(D)

12. Empirical claim (not endorsed by Keynes): ∀D, if D is real and if ∃pdm [pdm = obs(D)], then x = (D,obs,R,pdm) is not only a CKMS (which it is by definition) but even a KMS (which might be false).

13. Conclusion by Keynes:

If employment is linearly dependent on income Y (which is reasonable to assume), and if the values of (m,a) change more slowly through time than the values of Y and C, then any additional ΔI(ti) (ti = now,...,eternity), measured in man years, will raise employment by kΔI(ti). This multiplier k is in any case far greater than one: at least 3, maybe even 5. Hence every man put to work by the government will put between 2 and 4 other men to work, and not, as everybody thinks, only himself.

Appendix 2: Notation

I have a slightly personal notation, but it proves highly effective in this context:

f(x) is, as usual, the value in the range of f of domain element x ∈ X (X is the domain of f).

f(X) refers to {f(x) | x ∈ X}

Σ f(X) refers to Σ f(x) over all x ∈ X

Σ g(X,y), y = 1,...,k refers to k such sums, one for every y

Σ g(X,y), y ∈ Y refers to such a sum for every y ∈ Y

A function g: f(X) → D maps the set of all f(x), x ∈ X, into the set of decimal numbers.

I liberate the reader from memorising the meanings of function letters f, g, h, etc. by consistently using italicised mnemonics consisting of many letters ("intrasales(x,y,z)" instead of "f(x,y,z)").

One mnemonic metafunction name is number: number(S) refers to the number of elements in set S. If S is a matrix, do not mix this up with the number of possible matrices: in a 2×2 matrix M of decimals, number(M) is 4, but the number of elements of the set of all possible matrices M is infinite.

Unordered sets are written between { }, ordered sets between ( ), and mathematical operations, like [x+y], and equations, like [x=y], are, if necessary, written between [ ].
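For readers who prefer code to sums, a minimal sketch in Python of how the notation is meant to be read (the set names and values are illustrative assumptions):

    # f(X) = {f(x) | x in X};  Σ f(X) = sum of f(x) over all x in X
    X = {"a", "b", "c"}
    f = {"a": 1.0, "b": 2.0, "c": 3.0}

    f_of_X = {f[x] for x in X}                # f(X)
    sum_f_of_X = sum(f[x] for x in X)         # Σ f(X) = 6.0

    # Σ g(X,y), y ∈ Y: one such sum for every y
    Y = {1, 2}
    g = {(x, y): f[x] * y for x in X for y in Y}
    sums_per_y = {y: sum(g[x, y] for x in X) for y in Y}   # {1: 6.0, 2: 12.0}

    # number(S): the number of elements of S
    number = len
    assert number(X) == 3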


Literature

Balzer, Wolfgang (1982) Empirical claims in exchange economics. In: Stegmueller, Wolfgang, Wolfgang Balzer and Wolfgang Spohn (eds) (1982), 16-40

Balzer, Wolfgang and Bert hamminga (eds) (1989) Erkenntnis Vol 30, nr. 1-2: Philosophy of Economics. Reprinted as Balzer, W. and hamminga, B. (1989) Philosophy of Economics, Dordrecht: Kluwer Academic Publishers, ISBN 0-7923-0157-9, 270 pp.

Balzer, Wolfgang and Sneed, Joseph D. (1977) Generalized Net Structures of Empirical Theories, Part I. In: Studia Logica Vol. XXXVI, 3, pp. 195-211

Churchland, P.M. and C.A. Hooker (eds) (1985) Images of Science, Chicago, London: University of Chicago Press

Cools, Cornelis (1993) Capital Structure Choice: Confronting (Meta)theory, Empirical Tests and Executive Opinion. Tilburg: Gianotten

Cools, Cornelis, Bert hamminga and Th.A.F. Kuipers (1994) Truth Approximation by Concretization in Capital Structure Theory. In: B. hamminga and N.B. de Marchi (eds), Idealization VI: Idealization in Economics. Poznan Studies in the Philosophy of the Sciences and the Humanities, Vol 38. Amsterdam, Atlanta: Rodopi, pp. 11-41

de Marchi, N.B. (1976) Anomaly and the Development of Economics: the Case of the Leontief Paradox. In: Latsis, S. (ed) Method and Appraisal in Economics. London: Cambridge University Press, pp. 109-128

de Marchi, Neil B. (ed) (1992) Post-Popperian Methodology of Economics: Recovering Practice, Boston, Dordrecht, London: Kluwer

Diederich, Werner (1982) A structuralist reconstruction of Marx's Economics. In: Stegmueller, Wolfgang, Wolfgang Balzer and Wolfgang Spohn (eds) (1982), 145-60

Freedman, D.A. et al. (1991) Statistics

Garcia de la Sienra, Adolpho (1982) The basic core of the Marxian economic theory. In: Stegmueller, Wolfgang, Wolfgang Balzer and Wolfgang Spohn (eds) (1982), 118-44

Giere, Ronald N. (1985) Constructive realism. In: Churchland, P.M. and Hooker, C.A. (eds) (1985), 75-98

Haendler, Ernst W. (1982) Ramsey-elimination of utility in utility maximizing regression approaches. In: Stegmueller, Wolfgang, Wolfgang Balzer and Wolfgang Spohn (eds) (1982), 41-62

hamminga, Bert (1983) Neoclassical Theory Structure and Theory Development, Berlin: Springer

hamminga, Bert (1989) Sneed versus Nowak: An Illustration in Economics. In: Balzer, W. and hamminga, B. (eds) (1989) Erkenntnis, Vol. 30, nr. 1-2, pp. 247-265

hamminga, Bert (1989/90) The Structure of Six Transformations in Marx's Capital. In: Brzezinski, J., Coniglione, F., Kuipers, Th.A.F. and Nowak, L. (eds), Idealization I: General Problems. Poznan Studies in the Philosophy of the Sciences and the Humanities, Vol 16, Amsterdam, Atlanta: Rodopi, pp. 89-111

hamminga, Bert (1992) Learning Economic Method from the Invention of Vintage Models. In: de Marchi, N.B. (ed) (1992), pp. 327-354

hamminga, Bert (1995) Interesting Theorems in Economics. In: Kuipers, Th.A.F. (ed), Cognitive Patterns in Science and Common Sense, Amsterdam: Rodopi, pp. 227-239

hamminga, Bert (1982) Neoclassical Theory Structure and Theory Development. In: Stegmueller, Wolfgang, Wolfgang Balzer and Wolfgang Spohn (eds) (1982), 1-15

hamminga, Bert (1983c) The intuitive goal of the economist's endeavours. 7th International Congress of Logic, Methodology and Philosophy of Science, Abstracts of Section 11, Salzburg

hamminga, Bert and Balzer, W. (1986) The Basic Structure of Neoclassical General Equilibrium Theory. In: Erkenntnis, Vol 25, pp. 31-46

hamminga, Bert and N.B. de Marchi (1994) Idealization and the Defense of Economics: Notes Toward a History. In: Idealization VI: Idealization in Economics, Poznan Studies in the Philosophy of the Sciences and the Humanities, Vol. 38, Amsterdam, Atlanta: Rodopi, pp. 11-41

Haslinger, Franz (1982) Structure and problems of equilibrium and disequilibrium theory. In: Stegmueller, Wolfgang, Wolfgang Balzer and Wolfgang Spohn (eds) (1982), 63-84

Janssen, Maarten J. (1989a) Structuralist reconstruction of classical and Keynesian macroeconomics. In: Balzer and hamminga (eds) (1989), 165-181

Janssen, Maarten J. and Th.A.F. Kuipers (1989b) Stratification of general equilibrium theory: a synthesis of reconstructions. In: Balzer and hamminga (eds) (1989), 183-205

Keuzenkamp, Hugo A. (1994) Probability, Econometrics and Truth. Dissertation, Tilburg University, Department of Economics

Keuzenkamp, Hugo A. and Anton P. Barten (1995) Rejection without falsification: on the history of testing the homogeneity condition in the theory of consumer demand. Journal of Econometrics 67

Keuzenkamp, Hugo A. and Jan R. Magnus (1995) On tests and significance in econometrics. Journal of Econometrics 67, 2-24

Keynes, J.M. (1936) The General Theory of Employment, Interest and Money, Cambridge (1973 reprint)

Keynes, J.M. (1939) Official Papers. The League of Nations: Professor Tinbergen's method. The Economic Journal, September, 558-68

Keynes, J.M. (1940) Comment [on Tinbergen 1940]. The Economic Journal, March, 54-7

Kuhn, Thomas S. (1962) The Structure of Scientific Revolutions, Chicago (2nd ed. 1970)

Kuipers, Th.A.F. (1982) Approaching descriptive and theoretical truth. Erkenntnis 18, 343-387

Kuipers, Th.A.F. (1984) Approaching the Truth with the Rule of Success. In: Philosophia Naturalis Vol. 21, pp. 244-253

Kuipers, Th.A.F. (1985) The Paradigm of Concretization: the Law of van der Waals. In: Brzezinski, J. (ed) Consciousness: Psychological and Methodological Approaches (Poznan Studies in the Philosophy of the Sciences and the Humanities ???), Amsterdam: Rodopi, pp. 185-199

Kuipers, Th.A.F. (1987) A structuralist approach to truthlikeness. In: Kuipers, T. (ed) What is Closer-to-the-Truth?, Poznan Studies, Vol. 10, 79-99

Kuipers, Th.A.F. (1992) Naive and refined truth approximation. In: Synthese Vol. 93, pp. 299-341

Kuipers, Th.A.F. (1992a) Truth approximation by concretization. In: Brzezinski, J. and Nowak, L. (eds) Idealization III: Approximation and Truth, Poznan Studies, Vol. 25, 159-179

Leontief, W. (1956) Factor Proportions and the Structure of American Trade: Further Theoretical and Empirical Analysis. In: Review of Economics and Statistics, Vol 38, pp. 386-407

Leontief, W. et al. (1953) Studies in the Structure of the American Economy, New York

Nersessian, Nancy (ed) (1987) The Process of Science: contemporary philosophical approaches to understanding scientific practice, Dordrecht etc.: Nijhoff

Neyman, Jerzy and E.S. Pearson (1967) Joint Statistical Papers, Cambridge: Cambridge University Press

Nowak, L. (1971) The Problem of Explanation in Marx's Capital. Quality and Quantity, Vol. V, No 1

Nowak, L. (1974) The Problem of Explanation in Karl Marx' Capital. Revolution World, Vol. VIII

Nowak, L. (1975) Idealization: A Reconstruction of Marx' Ideas. Poznan Studies I, 1, pp. 25-42

Nowak, L. (1985) The Structure of Idealization. Dordrecht: Reidel

Pearce, David and Michele Tucci (1982) A general net structure for theoretical economics. In: Stegmueller, Wolfgang, Wolfgang Balzer and Wolfgang Spohn (eds) (1982), 85-102

Sneed, Joseph D. (1971) The Logical Structure of Mathematical Physics. Dordrecht: Reidel

Sneed, Joseph D. (1982) The logical structure of Bayesian decision theory. In: Stegmueller, Wolfgang, Wolfgang Balzer and Wolfgang Spohn (eds) (1982), 201-222

Stegmueller, Wolfgang, Wolfgang Balzer and Wolfgang Spohn (eds) (1982) Philosophy of Economics, Berlin, Heidelberg, New York: Springer

Stegmueller, Wolfgang (1973) Logische Analyse der Struktur ausgereifter physikalischer Theorien. 'Non-statement view' von Theorien, Heidelberg, New York: Springer

Stern, R.M. (1975) Testing Trade Theories. In: Kenen (ed) International Trade and Finance: Frontiers for Research, Cambridge (Mass.)

Suppes, Patric C. (1957) Introduction to Logic, New York: Van Nostrand

Suppes, Patric C. (1960) A comparison of the meaning and uses of models in mathematics and the empirical sciences. Synthese XII, No 2/3, 287-301

Suppes, Patric C. (1962) Models of data. In: Logic, Methodology and Philosophy of Science: Proceedings of the 1960 International Congress, Stanford, Cal.: Stanford University Press

Tinbergen, J. (1940) On a method of statistical business cycle research. A reply. The Economic Journal, March, 41-54

Townshend, H. (1936) Effective demand and expected returns, note attached to letter to Keynes. In: Moggridge, D. (ed) John Maynard Keynes, Collected Writings Vol XXIX: The General Theory and After, a Supplement, Cambridge: Cambridge University Press, 1979

van Fraassen, Bas C. (1985) Empiricism in the philosophy of science. In: Churchland, P.M. and Hooker, C.A. (eds) (1985)

van Fraassen, Bas C. (1987) The semantic approach to scientific theories. In: Nersessian (ed) (1987), 105-24

van Fraassen, Bas C. (1991) Quantum Mechanics: An Empiricist View, Oxford: Clarendon

Weintraub, Roy E. (1992) Commentary [to hamminga 1992]. In: de Marchi, Neil B. (ed) (1992), 355-73

[Witness the expert discussion: Wilhelm Wundt 1880; Edward B. Titchener (1924) Experimental Psychology: A Manual, London: Macmillan; a race between measurement possibilities and ambitions.]