Nova workboard

a blog from young economists at Nova SBE

Bolsa Família: Welfaring out of poverty


Bolsa Família is a social welfare program in Brazil, part of a broader network of anti-poverty measures called Fome Zero, introduced by former Brazilian President Lula da Silva. Although it has since been expanded by Dilma Rousseff through the Brasil Sem Miséria program, I focus on Bolsa Família itself.

Bolsa Família is a Conditional Cash Transfer (CCT), a type of program well known to (development) economists: a transfer is made to certain individuals, conditional on them doing something (for instance, taking their children to the doctor regularly, or sending them to school). This conditionality is needed to avoid misuse of the cash transfer. The bottom line is that it aligns parents’ incentives to make sure that their kids really go to school.

Bolsa Família is centred on education and, in short, gives a certain amount of money to poor families that make sure their children: go to school; miss few or no classes; are vaccinated.
Although CCTs are very common in Latin America, few projects of this kind have been analyzed systematically and methodically, ex-ante and ex-post, so this one is perfect to consider from a microeconomic perspective. Moreover, it is the largest of its kind in the world. For example, Hoffman, 2006 (in Portuguese) studied how much of the total decrease in inequality in Brazil between 1997 and 2004 was due to this program, and concluded that in many regions it was close to 28%. In the poorest region, the Northeast, this effect reached 66% of the reduction in inequality, and could reach even higher values (87%) over a shorter time period. As far as fairness is concerned, the program did very well: it decreased poverty and inequality dramatically.

The problem may, however, lie in the efficiency of the program, in pure economic terms. Programs of this type are certainly not Pareto efficient: the money given to these families comes from somewhere, usually taxation, so not only do some people now have less money than before, but there is also some deadweight loss (excess burden) associated with the taxes, meaning we lose some feasible and optimal resource allocations. The first counterargument is that redistribution is not only desirable in itself: when 8.5% of the population is below the extreme poverty line, even at the cost of some DWL, we end up with a higher social surplus than before (this, of course, relies on an assumption about the marginal utility of money, i.e. that someone very, very, very poor gets more utility from one extra euro than Bill Gates does. A controversial assumption). Moreover, I think a two-period analysis of this program is too limited. The biggest advantage of CCTs of this kind is their investment in human capital. Most of the children now going to school are the first generation in their families to do so, and since educational level is highly correlated with wages, these kids will be, on average, better off than their parents. These repeated but limited transfers (not one-time transfers, but an extension of them) can take, and have taken, millions of people out of poverty, and this effect could persist for generations, so we end up with an intergenerationally higher utility level! Not to mention side effects such as the efficiency gains from having more educated people in the labor force, or the decrease in crime rates due to less poverty and less educational dropout, and so on.
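The marginal-utility argument above can be made concrete with a tiny numeric sketch. All the numbers and the log-utility form are illustrative assumptions, not estimates for Brazil: even if taxation burns part of each euro as deadweight loss, transferring it from a very rich taxpayer to a very poor family raises the utilitarian sum.

```python
import math

# Log utility encodes diminishing marginal utility of money (an assumption).
# Incomes are made-up: a family in extreme poverty vs. a very rich taxpayer.
poor, rich = 70.0, 1_000_000.0

def u(income):
    return math.log(income)

# Suppose only 0.80 of each euro taxed actually arrives (deadweight loss).
gain_poor = u(poor + 0.80) - u(poor)    # utility gained by the poor family
loss_rich = u(rich) - u(rich - 1.00)    # utility lost by the taxpayer

print(gain_poor > loss_rich)  # True: the utilitarian sum still rises
```

With log utility the gain to the poor family is roughly four orders of magnitude larger than the loss to the taxpayer, which is why the deadweight loss hardly matters in this comparison.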

These transfers could help millions of people escape the so-called poverty trap, while helping Brazil become more competitive.

Sérgio Rocheteau #620


Driving Speed as an Externality


We often notice how many drivers tend to disrespect speed limits, while the existence of these limits means the State wants drivers to abide by them. This disparity can be seen as an externality, which arises when actions taken by one agent affect the welfare of others, without this being reflected in the decision of the former.

In general, drivers face a trade-off between safety and speed. Since the same increase in speed matters more the lower the speed level, the marginal private benefit (MPB) of speed is decreasing. Regarding the marginal private and social costs (MPC and MSC), we can consider a positive slope. This can be justified by the fact that increasing speed not only increases the likelihood of an accident happening, but also increases the severity of such an accident in the event that it happens.




The socially optimal level of speed is S*, where MPB = MSC. However, if there is no mechanism to correct the externality, the driver will choose S1, where MPB = MPC. Since S1 differs from S*, there is room for intervention. The goal here is to make it more costly for drivers to speed. Ideally, the State should increase the MPC by the amount necessary to make it equal to the MPB at the socially optimal level of speed. Graphically, this means an increase in the slope of the MPC curve. There are several ways to achieve this, and they may complement each other. Two of the most relevant are explained below.
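With linear curves this can be sketched numerically. The intercepts and slopes below are made up for illustration; the point is only that the driver's private optimum S1 exceeds the social optimum S*.

```python
# Assumed linear curves:
#   MPB(S) = a - b*S      decreasing marginal private benefit of speed
#   MPC(S) = c*S          marginal private cost (own accident risk)
#   MSC(S) = d*S, d > c   marginal social cost (risk imposed on others too)
a, b, c, d = 120.0, 1.0, 0.2, 0.5

S1    = a / (b + c)   # driver's choice:  MPB = MPC
Sstar = a / (b + d)   # social optimum:   MPB = MSC

print(round(S1), round(Sstar))  # 100 80 -- the unregulated driver drives too fast
```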

First, by imposing a speed limit that reflects the socially optimal level, and imposing fines on those who do not respect it, the State can significantly increase the MPC for drivers who exceed the limit and thereby deter them from doing so. For this to be effective, the value of the fine, adjusted by the probability drivers expect of being caught, has to be high enough to exceed the marginal benefit of exceeding the limit. In general, the effectiveness of this method increases with the fine’s value and with the number of speed controls. Since what matters is how frequent drivers perceive the controls to be, it makes sense that law enforcement authorities tend to advertise increases in controls (example here, in Portuguese).
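The deterrence condition is a one-liner: speeding stops paying once the expected fine, the probability of being caught times the fine's value, exceeds the benefit. The numbers are illustrative assumptions.

```python
# Deterrence requires p * fine > marginal benefit of exceeding the limit.
benefit = 30.0   # assumed value of speeding, e.g. euros' worth of time saved
fine = 120.0

def deters(p_caught):
    return p_caught * fine > benefit

print(deters(0.1), deters(0.5))  # False True
```

This is why advertising more frequent controls can work even before any extra fine is collected: it raises the perceived `p_caught`.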

Another way to reduce speeding stems from the incentives insurance companies have to make their clients drive more safely. An insurance contract opens the way to moral hazard: since the insurance company cannot directly observe the driver’s actions, the driver has an incentive to be less careful than he would be otherwise. Insurance companies reduce excessive speeding through the design of contracts aimed at reducing this moral hazard. Everything else constant, insurance companies want drivers to reduce the risk they take. While their goal is not necessarily to bring drivers to the socially optimal level, they want to increase drivers’ perception of the MPC, so their actions result in a reduction of driving speed.

To sum up, if we think of speeding as a negative externality, actions should be taken to bring its level closer to the socially optimal one. While there are different ways to do this, all of them can be translated as an increase in the MPC of speeding, so the goal is for drivers to internalize the externality in their decision-making process.


João Araújo no. 638



Is the program to reduce the number of civil servants efficient?

One of the measures the Portuguese government wants to implement is a reduction in the number of civil servants. To do so, it has created a program whereby workers can leave by mutual agreement, receiving monetary compensation in return (link here, in Portuguese). The aim of this post is to discuss whether this program is an efficient way to reduce the number of civil servants.

To illustrate this program, a labour-leisure framework can be useful. In the following analysis, it is assumed that workers have the same preferences over income and leisure. All variables considered here should be interpreted as the present value of all their future amounts. This implies that, even if a worker expects not to work for some period after leaving the State – receiving, for instance, unemployment benefits – as long as he expects to start working after some time (e.g., when he stops receiving those benefits), this will be depicted in the graph as a positive amount of labour. Wage (ω) represents not only the nominal wage but also accounts for factors such as the different uncertainty levels faced in the private sector – where workers that enter the program are expected to move – and the public sector. The current wage is denoted by ωS, while ωP stands for what workers expect to earn if they move to the private sector. This ωP varies among workers, such that ωPH > ωPL. Furthermore, ωS > ωP: ωP is the opportunity cost of working for the State, so if it were larger than ωS, civil servants would leave voluntarily, making this measure unnecessary.



If a worker starts at A, leaving his job without compensation leaves him at a point like B (if an H type), where he is worse off. For him to be willing to accept this change, he must receive a compensation that makes him at least as well off as before. This amount is given by the compensating variation (CV), D – IH and C – IL in the figure for the two types of workers, who will end up at a point like FH or FL. One important aspect to note is that the higher ωP is, the smaller the compensation required. This means that, for any compensation level the government decides, the workers who leave are the ones who value their current job less relative to the alternatives. In this sense, this measure maximizes the total welfare of civil servants when compared with other ways of reducing their number. This happens because, under this mechanism, the workers most willing to leave the State are the ones who do, whereas if, for instance, the government decided who would leave, this might not hold.
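Under a concrete (assumed) utility function, the compensating variation can be computed explicitly. With Cobb-Douglas utility u(c, l) = c·l over consumption and leisure, T hours available, and unearned income m, indirect utility is v(w, m) = (wT + m)²/(4w), and the CV for moving from ωS to ωP solves v(ωP, CV) = v(ωS, 0). All numbers are illustrative.

```python
import math

T = 16.0    # hours available per day
wS = 10.0   # wage (broadly defined) in the public sector

def cv(wP):
    """Compensation making a worker indifferent to leaving at outside wage wP.
    Solving (wP*T + CV)**2 / (4*wP) = wS * T**2 / 4 gives:"""
    return T * (math.sqrt(wP * wS) - wP)

cv_L, cv_H = cv(6.0), cv(8.0)   # low- and high-outside-wage workers
print(round(cv_L, 1), round(cv_H, 1))  # 27.9 15.1
# The H type needs less compensation, so for any offer the government fixes,
# the workers who accept are those who value their current job least.
```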

A possible drawback of this approach, from the point of view of the public services, is that, if the valuation workers make of their alternatives increases with their productivity, i.e., the best workers expect to earn more at a new job, this process may have the effect of making the most productive workers leave, worsening the quality of the public sector.

In sum, it is possible to analyse and compare several ways for the government to reduce employment – leaving aside the debate on whether this is optimal – and see that the current program of mutual agreement rescission may be the most favourable for the workers, since those who will stay in the State are those who value it more.

João Araújo, no. 638

The irrationality within us

We always think of human beings as rational creatures. In fact, reasoning is what sets us apart from other animals. In economics, this idea is present in the theory of rational expectations, which proposes that a market of rational actors will value things correctly. However, experimental evidence shows that we may not be as rational as we think.

Theories of rationality have provided a powerful framework for modeling microeconomic decisions. From this arises the characterization of preferences and resource allocation as a utility maximization problem. Utility-based theories of rational decision-making have a number of useful features; most importantly, such models make it possible to translate conceptually vague preferences into quantifiable units.

Although improvements on the traditional model have been made, incorporating several constraints, most economists have remained agnostic about the roots of utility and, moreover, have been ignoring predictable variations linked to specific goals and life history that, ultimately, have extremely important influences on how people allocate their limited resources. Dismissing these from the equation leaves an incomplete account of rational decision-making.

In Keynes’s view, economic fluctuations are largely driven by “animal spirits” that are more easily explained by psychologists than by economists.

Given this, economic psychologists have questioned the classical economic model, generating a multitude of findings that challenge the assumptions it rests on. Evidence shows that slight variations in the decision frame can lead to tremendously different evaluations. Demonstrations of suboptimal decisions also raise doubts about what exactly makes a decision “rational”.

In this regard, Akerlof and Shiller (in their book Animal Spirits: How Human Psychology Drives the Economy, and Why It Matters for Global Capitalism, 2009) explore various types of “animal spirits” and explain how they affect the economy. The authors attempt to restore animal spirits to economic theory by drawing on today’s greater understanding of human psychology in order to explain our irrationality.

Animal spirits are, essentially, human emotions that cannot be turned off; they are inherent to human beings. Unconstrained, they drive the economy into improper “booms”. On the other hand, mitigated by government, “they are a great source of entrepreneurial energy, safely channeled into a healthy capitalism”.

In conclusion, the classical view of rationality as the maximization of expected utility may not explain how most individuals actually make their decisions, and recognizing this may be the key to “revamp economic theory to deal with a market system that, quite irrationally, most disturbingly, failed to govern itself”.

Rita Azevedo


Failure of General Equilibrium Theory

Over its long decades of development within the discipline of economics, general equilibrium theory has usually been regarded as the ultimate formalization of economic theory and established as the fundamental framework for theoretical work. General equilibrium theory is generally presented as providing the rigorous theoretical version of Adam Smith’s invisible hand and demonstrating the desirable properties of a competitive economy.

Yet there is much controversy concerning general equilibrium theory. Some authors have raised questions about its significance for economic theory; Ackerman went further and stated that general equilibrium is “not exactly alive and well any more”. The controversy seems to stem from the fact that, although general equilibrium theory is valid in a highly formalized way, there is some question as to whether it possesses empirical content.

The best-known results of general equilibrium theory are the two theorems proved by Kenneth Arrow and Gerard Debreu. First, under familiar assumptions defining an idealized competitive market economy, any market equilibrium is a Pareto optimum. Second, under more restrictive assumptions, any Pareto optimum is a market equilibrium for some set of initial conditions. There is a long-standing debate about the interpretation of these results, in light of the lack of realism of some of the assumptions made. For example, nonconvexities, such as increasing returns to scale in production, are common in reality; if they are allowed into the theory, the existence of an equilibrium is no longer certain, and a Pareto optimum need not be a market equilibrium.

The second fundamental theorem is often interpreted as saying that any efficient allocation of resources could be achieved by market competition, after an appropriate lump-sum redistribution of initial endowments. In reality, this interpretation is debatable: the conditions assumed in the proofs do not apply in real life and, even if they did, applying the Arrow-Debreu theorems would require dynamic stability. The idea of redistributing initial resources and then letting the market reach a new equilibrium presupposes that the new equilibrium is both unique and stable. But if, for instance, the equilibrium is not unique, one possible equilibrium might be more socially desirable than another, and the market might converge toward the wrong one. If, on the other hand, the equilibrium is unstable, the market may never reach it, or might not stay there for long if perturbed by random events.
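A toy excess-demand function makes the uniqueness and stability worries concrete. The cubic below is invented for illustration; it has three equilibria, and a simple tâtonnement (raise the price when excess demand is positive) lands on different ones depending on where it starts.

```python
def z(p):
    """Toy excess-demand function with equilibria at p = 1, 2 and 3."""
    return -(p - 1) * (p - 2) * (p - 3)

def tatonnement(p, step=0.01, iters=5000):
    # Walrasian price adjustment: move price in the direction of excess demand.
    for _ in range(iters):
        p += step * z(p)
    return p

print(round(tatonnement(1.9), 3), round(tatonnement(2.1), 3))  # 1.0 3.0
# The middle equilibrium (p = 2) is unstable: start just below it and the
# market drifts to p = 1; start just above it and it drifts to p = 3.
```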

In conclusion, the equilibrium in a general equilibrium model is not necessarily either unique or stable in real life, and there are apparently no grounds for dismissing badly behaved outcomes as implausible special cases.

Rita Azevedo


VAT vs IMI – Portuguese Case

In July 2012 and again in May 2013, the OECD advised Portugal that it would be better to increase IMI (Property Tax) rather than VAT (Value Added Tax), because it would be a fairer measure and better for economic growth.

I have decided to analyse this from the consumers’ point of view. To do so, I will divide consumers into three groups: (1) households who own extremely valuable houses because they have a higher income and can afford the costs of such accommodation, (2) property-owning households with a medium income, and (3) households who are not property owners, i.e., households with low income.

For a simpler and more understandable analysis, we will assume (i) that consumers’ utility is measured by the quantity of goods they can afford with their income.

According to INE’s statistics, in the first quarter of 2009, 40.6% of all workers received a net salary of less than €600 and only 12.4% earned more than €1200. With all the austerity measures reducing wages in Portugal since 2009, families’ incomes may be even lower today. Note that we are talking about the total number of workers, but we cannot forget that in the second quarter of 2013 the unemployment rate was 16.4%. Nevertheless, according to CENSOS 2011, 73.5% of all dwellings are inhabited by their owners. We can therefore conclude that the majority of the Portuguese population belongs to group (2).

Besides this, medium- and low-income families devote a higher proportion of their income to consumption than high-income families, which means that the majority of Portuguese families will have their welfare affected if their consumption is affected; so it would be better for these families if the government amended IMI rather than VAT.

Imagine that the government increases VAT (whose highest rate is already 23%). Consumers in groups (2) and (3) perceive that the goods they used to buy are now more expensive, so they will buy less because their purchasing power decreases (income effect) and they will choose cheaper goods (substitution effect). Therefore, their utility will decrease. Note that, in absolute terms, high-income families will reduce their consumption by more than groups (2) and (3), but in relative terms group (1) will be less affected by this decision, because consumption is a smaller proportion of their income or wealth, so their marginal utility is lower. This means that groups (2) and (3) will be the worst affected.

If, instead, the government increases IMI (whose maximum rate is now 0.8%), this will affect groups (1) and (2). But the value of group (1)’s houses is higher, i.e., the tax base is larger, so group (1) will pay more than group (2) in absolute terms – assuming that a house’s value is directly related to the family’s income. Again, though, in relative terms IMI has a higher impact on group (2)’s income, because that income is lower. Moreover, if IMI increases, group (1)’s consumption will not decrease, because the share of income devoted to consumption will not change; group (2)’s consumption will decrease, but not immediately or by much because, as said, the tax paid will be proportional to the value of the house, which is assumed proportional to income. Finally, group (3)’s consumption will not decrease at all. Applying our assumption (i) here, we would say that the utilities of groups (1), (2) and (3) are not much affected, because consumption would not decrease sharply. Still, the utility of group (2) may decrease because its income will be lower – but not by as much as if the policy instrument were VAT.
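A back-of-the-envelope table shows the pattern. All the incomes, house values, consumption shares, and rate changes below are invented for illustration; the point is the pattern of relative burdens, not the exact numbers.

```python
# name: (annual income, value of house owned, share of income consumed)
groups = {
    "(1) high income":   (100_000, 500_000, 0.4),
    "(2) medium income": (20_000, 100_000, 0.8),
    "(3) low income":    (8_000, 0, 1.0),   # not property owners
}

vat_rise = 0.02    # VAT up by 2 percentage points
imi_rise = 0.002   # IMI up by 0.2 percentage points

for name, (income, house, c_share) in groups.items():
    vat_burden = vat_rise * c_share          # extra VAT as a share of income
    imi_burden = imi_rise * house / income   # extra IMI as a share of income
    print(f"{name}: VAT {vat_burden:.1%} vs IMI {imi_burden:.1%} of income")
```

With these assumptions the VAT rise costs groups (2) and (3) 1.6% and 2.0% of income against 0.8% for group (1), while the IMI rise costs group (3) nothing and falls most heavily, in absolute terms, on group (1).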

Thus, in terms of aggregate consumer welfare, the impact of a variation in VAT is bigger than that of a variation in IMI; and since IMI taxes people according to their wealth, it is a fairer tax, so the system will be fairer, as the OECD declared.

Note that, in this case, the decision to increase IMI does not imply a reduction in VAT, because the main goal is to increase government revenue. The only question is finding the most efficient way of doing it while minimizing the impact on aggregate household utility. As group (2) is the majority of the Portuguese population, it is better to amend IMI, because it does not have an immediate impact on consumption and, on the other hand, does not hit the majority as hard, so aggregate consumer surplus will not change as much as if the government amended VAT. In conclusion, the OECD’s advice will leave consumers better off (in comparison with amending VAT).

Maria João Azevedo #639 



Buying Cheap

Over the last years, we have become used to “buying cheap”. Worse, we have become used to thinking that buying cheap was an amazing achievement of industry. And maybe it would be, if we lived in a world where the central bank issued money every time a firm opened. Otherwise, how could firms keep increasing their output and selling almost everything?

Let us first understand where these cheap products came from. When countries with cheap labor started to join the World Trade Organization, many companies decided to transfer their production departments there in order to produce more cheaply and earn higher profits. That was the original idea. Nevertheless, when a company transfers a department somewhere else, it spreads information about its know-how. In effect, these companies were teaching others how to produce their products. Thus, when countries with cheap labor such as China or Taiwan learned how to manufacture, they started to create their own companies, copying the model of the established brands.

With the increase in competition, prices went even lower and were set more closely to the international price. Hence consumers were better off, because their consumption opportunities increased.

But what about the other companies around the world? Was this good for the supply side too? Unfortunately, we cannot say it was. Many countries have seen their companies go bankrupt, and unemployment has increased a lot in manufacturing industries. We can see this in the case of the USA: today the USA has a trade deficit with China that represents almost one quarter of its total trade balance. This is the result of importing too much or exporting too little. Between 1990 and 2000, Chinese exports to the USA increased by 880%, while US exports to China increased by almost 230%. We can see the difference. This was exactly the problem: as China rose in international trade, becoming a specialist in what other countries used to be, the world faced a disequilibrium because the other countries did not reallocate resources.

David Ricardo, a classical economist, explained in his principle of comparative advantage how trade can benefit all parties involved – individuals, companies, and countries – as long as goods are produced with different relative costs. So we can gain from international trade as long as we specialize in the products in which we have a comparative advantage. Thus, if another country appears on the scene and can produce more competitively, we have to reallocate our resources towards what we are competitive in. This is what countries should have done. Otherwise unemployment would rise. And it did. In Portugal, for example, from 1988 to 2006, manufacturing industries, mainly textiles and footwear, left 250,000 people unemployed.
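Ricardo's argument is easy to verify with made-up unit labor requirements. Here China is assumed better at cloth, the USA at machinery; with a fixed labor endowment, specialization along comparative advantage raises world output of both goods.

```python
hours = 120.0   # assumed labor endowment in each country
# hours needed per unit of output (illustrative, not data):
req = {"China": {"cloth": 2.0, "machinery": 8.0},
       "USA":   {"cloth": 3.0, "machinery": 4.0}}

def autarky_output(country):
    # no trade: split labor evenly between the two goods
    return {g: (hours / 2) / need for g, need in req[country].items()}

world_autarky = {g: sum(autarky_output(c)[g] for c in req)
                 for g in ("cloth", "machinery")}
world_special = {"cloth": hours / req["China"]["cloth"],        # China -> cloth
                 "machinery": hours / req["USA"]["machinery"]}  # USA -> machinery

print(world_autarky)   # {'cloth': 50.0, 'machinery': 22.5}
print(world_special)   # {'cloth': 60.0, 'machinery': 30.0} -- more of both
```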

Besides, reallocation is a very costly process and cannot be done overnight. It faces many difficulties. For example, people sometimes work in one sector all their lives and cannot do anything else, i.e., they are specialised in that job. Finding a new one in such conditions is much more difficult. We could think that people who cannot get a job in one place simply move to another, but people rarely change location (Peter Huber, 2004).

What I mean is that countries have to find other ways to survive; they have to innovate, or this will happen every time an underdeveloped country becomes a developing one. Portugal could have specialised in producing cotton or wool, because China needs these commodities to produce textiles. And the USA could have specialised in producing machinery, which is also necessary for that type of industry.

Notice that in recent years we have been witnessing a change in textile industries in most developed countries: they have changed their target. They now produce high-quality products in order to reach people with higher purchasing power, i.e., success through differentiation. One can see that this has been happening in Portugal, where nowadays a lot of companies have committed to vertical differentiation, producing goods of much higher quality. Even when this type of trade could not survive on Portuguese consumers alone, most of these companies have a website where consumers from all around the world can choose and import products.


Maria João Sá de Azevedo #639


Should taxes on consumption be all equal?

While imposing a single tax rate might look like a fair and logical measure, by analysing our tax system we can see that taxes on consumption are not equal across all goods. Why so?

A good starting point is to look at the effect of a tax in the market where it is introduced and in other markets. Introducing a tax distorts consumption decisions and thus implies a deadweight loss. The market where the tax is introduced may not feel the effect of the price increase strongly, but other markets are also affected, since disposable income can no longer purchase the same basket as before. But taxes can be used, e.g., as a means of redistribution and therefore, depending on society’s preferences, introducing a tax with a given goal may increase society’s welfare. This raises the question of whether taxes are the best way of promoting equality or whether they should be avoided – an interesting question for another post.

Bearing this in mind, for a given amount of money we intend to collect, we should now ask how to collect it (assuming, for simplicity, that we collect it all through consumption taxes): through an equal tax rate on all goods, or through differentiated taxes across sectors and specific goods?

I will present three cases that provide space for differentiating taxes across particular goods: i) different demand elasticities, ii) basic/superfluous goods, iii) externalities.

The first reason assumes that our aim is to minimize the total deadweight loss. Using the Ramsey rule, we know that “tax rates on goods should be inversely related to their elasticity of demand”[i], which is the same as saying that goods with comparatively more elastic demand should be taxed less.
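The inverse-elasticity version of the Ramsey rule, with each rate proportional to 1/elasticity, can be illustrated with assumed numbers (the goods and elasticities below are invented):

```python
# Assumed own-price demand elasticities:
elasticities = {"bread": 0.3, "restaurant meals": 1.2, "jewellery": 2.0}
lam = 0.12   # scale factor, chosen so the rates hit the revenue target

taxes = {good: lam / e for good, e in elasticities.items()}
for good, t in taxes.items():
    print(f"{good}: {t:.0%}")  # bread: 40%, restaurant meals: 10%, jewellery: 6%
```

Note how this collides with the equity argument of reason ii): the rule taxes bread, the inelastic necessity, most heavily.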

The second reason – the existence of goods that are very important to the low-income population compared to goods that are more superfluous – seems straightforward. Let’s take bread as an example. Bread is a good on which low-income families depend, given its nutritional content[ii] relative to its price. If the price of bread goes up, these families will suffer a lot; rich families, on the other hand, will be relatively immune. Now suppose that instead of taxing bread, it is decided to tax jewellery. That will affect the welfare of high-income families, but not of low-income ones. Taxing the two goods differently has a redistributive effect; how desirable it is depends on society’s preferences. This source of differentiation often conflicts with the previous one.

Taking externalities into account may also be a good reason to make taxes differ across goods. A negative externality like pollution[iii] creates disutilities that are not considered by consumers when buying the good; taxing it more heavily takes into account the bad effects that the good’s production/consumption implies. On the other hand, taxing goods that create positive externalities relatively less seems a fair decision: think, e.g., of aiding firms that are creating synergies[iv], engaging in an active industrial policy and giving scope for future economic growth.

So, the answer to my initial question is that it depends. We should analyse carefully how the demand for each good behaves, who consumes the goods we are taxing, and take into account idiosyncratic characteristics of the good that are not perceived by consumers. We should add, though, that it is not possible to establish the theoretically ideal “panel” of tax rates in the real world, due to the bureaucracy that thousands of tax rates would create, the facilitation of tax evasion, etc.


Samuel Cardoso, 624

Society’s welfare: after all, the purpose of our societies

In today’s world, we tend to believe that a higher GDP per capita should be the aim of our societies. Ignoring key fragilities in its measurement, what about the things GDP does not consider, things that are not valued by markets? Recently, indicators like the HDI have tried to overcome this handicap of reducing life to its mercantile side. But what I intend to do here is discuss some established ideas about the market-valued part of life.

Analysing the welfare of a society as a whole by analysing its “economic life” seems easy, but it is controversial. Does a society with twice the GDP per capita of another have higher welfare? It seems logical. But what if, in the first society, only 1% of the inhabitants get 50% of the wealth created, while in the second wealth is (re)distributed approximately equally? According to the utilitarian (Benthamite) social welfare function (SWF), the way wealth is distributed does not matter, but I think this is an unrealistic view.

Even within consumer theory, models suggest that spending more on a bundle does not necessarily mean higher utility (if there is no optimization).

What if we keep summing up the utility of all individuals, disregarding their level of income, but assume that the marginal utility of an extra unit of wealth is lower for a wealthy individual than for a poor one? Under this assumption, the way wealth is spread matters. More sophisticated social welfare functions, such as the isoelastic form W = (Σi Ui^ρ)^(1/ρ) – where ρ, a parameter ranging from 0 to 1, falls as aversion to inequality rises – treat inequality as a bad in itself. An even more radical approach, the Rawlsian social welfare function, states that social welfare is a function of the poorest individual’s income (assumed equal to his/her welfare).
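The three functions can be compared directly on two hypothetical societies with the same total income (taking utilities equal to incomes for simplicity; all numbers are invented):

```python
u_equal   = [25, 25, 25, 25]   # equal distribution
u_unequal = [85, 5, 5, 5]      # same total, concentrated at the top

def utilitarian(u):             # Benthamite: simple sum
    return sum(u)

def isoelastic(u, rho=0.5):     # W = (sum U_i^rho)^(1/rho), 0 < rho <= 1
    return sum(x ** rho for x in u) ** (1 / rho)

def rawlsian(u):                # welfare of the worst-off individual
    return min(u)

print(utilitarian(u_equal) == utilitarian(u_unequal))  # True: indifferent
print(isoelastic(u_equal) > isoelastic(u_unequal))     # True: equality preferred
print(rawlsian(u_equal), rawlsian(u_unequal))          # 25 5
```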

Arrow’s impossibility theorem also gives us good insights for analysing society’s choices, focusing on elections in a democracy with a rank-order voting system. The theorem shows that such a system is unable to build a social welfare function from voting. Specifically, it states that with three or more options, it is impossible to convert individuals’ ranked preferences into a social ranking while respecting three criteria: i) no dictators (the outcome must not simply equal the ranking of one specific person), ii) Pareto efficiency (if every voter prefers candidate A to candidate B, the outcome should rank A above B), iii) independence of irrelevant alternatives (the social ranking of two candidates should depend only on how voters rank those two candidates, so changes in the ranking of the others should leave their relative position unchanged).
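A three-voter example (the classic Condorcet cycle) shows why no coherent social ranking may exist. The ballots are hypothetical:

```python
# Each ballot lists candidates from most to least preferred.
ballots = [("A", "B", "C"),
           ("B", "C", "A"),
           ("C", "A", "B")]

def majority_prefers(x, y):
    """True if a strict majority ranks x above y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} to {y}:", majority_prefers(x, y))
# All three lines print True: A beats B, B beats C, and C beats A, so
# pairwise majority voting produces a cycle instead of a social ranking.
```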

So we can see that defining a social welfare function is not an easy task; moreover, the construction of “the right” function depends on our beliefs and ideas. For those who, like me, think the Benthamite function is wrong, there are arguments for distribution and redistribution policies instead of laissez-faire: a minimum guaranteed income, a minimum wage, subsidies to poor families, etc., and simultaneously taxes on high incomes, fortunes, etc. It is common to say that taxes distort production and consumption decisions and so affect GDP. On the one hand, in a general equilibrium framework, GDP need not fall (e.g., if we assume poor individuals consume a higher percentage of their income, handing them some income stimulates the economy, counterbalanced by a reduction in savings that may weaken the economy through a fall in investment). On the other hand, even if production falls, it is possible that society’s welfare rises by increasing less favoured citizens’ income at the expense of more privileged ones (ignoring production, this policy raises social welfare as long as marginal utility is positive but decreasing in income).

This post tries to show the importance of knowing social welfare functions when evaluating a given economic policy. One conclusion I take from it is that, since it looks almost impossible to know the “true” social welfare function, admitting that we do not know it seems a good step towards making right decisions: asking people directly about their preferences (after informing them of the predicted effects of the policies) seems an appropriate procedure.


Samuel Cardoso, 624

Human Capital versus Signalling: Consequences for public policy

In general, education is seen as a cultural phenomenon, something completely necessary nowadays, and it has become compulsory in most countries at least at the basic level. However, what is the economic objective of education, if any? There must be economic value in education to justify the amount of public and private funds spent on it, especially at advanced levels, where able workers sacrifice many hours of labor.

One of the two main theories that attempt to explain the value of education is human capital theory. This theory, developed in its modern form by Mincer and Becker, essentially points out that education builds human capital: by getting education, workers acquire knowledge and skills that make them effectively more productive. Education is thus seen as similar to investment in other forms of capital, with costs and returns. The one major difference is that, unlike physical capital, land or labor, human capital is not transferable: one can sell one’s machine, but one cannot permanently transfer one’s ability to speak Spanish to another person.

Nevertheless, there are those who argue that this is an incomplete view of the role of education. In 1973, Spence developed a model of education as a signal from potential employees to employers. In a nutshell, Spence showed that, even if education has no value for productivity, it can serve as a signaling mechanism that distinguishes “good” from “bad” workers. Employers cannot fully tell “good” workers from “bad” ones, but getting education is more costly (in terms of opportunity cost) for “bad” workers than for good ones. Thus, even selecting at random from a group of educated people yields a higher probability of picking a “good” worker than doing the same over the general population. Education can therefore serve as a mere (although very important) signal.
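Spence’s separating logic can be sketched with made-up numbers (none of these figures come from his model, they only illustrate its mechanism): education is worthless for productivity here, yet only the low-cost “good” type finds the signal worth acquiring, so the signal is credible.

```python
# Illustrative Spence signaling sketch; all numbers are invented.
cost_good, cost_bad = 4.0, 10.0          # cost per year of schooling, by type
wage_skilled, wage_unskilled = 100.0, 60.0
premium = wage_skilled - wage_unskilled  # 40: payoff to being "signaled"

def acquires_signal(cost_per_year, years_required):
    """A worker gets educated only if the wage premium covers the cost."""
    return premium >= cost_per_year * years_required

years_required = 5  # schooling threshold that employers reward

# Separating outcome: only the "good" type's cost is low enough
# to make the signal worthwhile.
assert acquires_signal(cost_good, years_required)     # 40 >= 4*5 = 20
assert not acquires_signal(cost_bad, years_required)  # 40 <  10*5 = 50
```

Because only “good” workers acquire it in this configuration, employers can rationally pay the premium to the educated, even though the schooling itself changed nobody’s productivity.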

The two theories have different consequences: one means an expansion of our capacities (our Edgeworth box), the other an improvement in information. But how much of education is signaling and how much is the building of human capital? The answer may differ with the kind of education. For pre-schooling, many studies conclude that “the return on investment from early childhood development is extraordinary, resulting in better working public schools, more educated workers and less crime”. For college degrees, it looks much more like signaling (at least in most areas): haven’t we all thought “I am never going to use this knowledge in the workplace”? A wise History professor once told me he thought that “a PhD is essentially a signalling device that shows whether or not one has an obsession for a certain subject”. Nevertheless, does this mean that the positive externality is smaller for higher levels of education, and that public spending should be larger for the first years of learning? It may be so, but it may also be that the “matching” service these later years of education provide is sorting out another market failure, distinct from the positive externality: lack of information.

José Cerdeira #628

Giffen Good – is it a dead theory?

In 1895, Alfred Marshall wrote in his Principles of Economics about a very interesting phenomenon related to the consumption behaviour of impoverished people near the subsistence level of nutrition. He imagined a consumer living in these specific conditions (which is actually the living standard of one billion people all over the world) whose diet was composed of a staple good (bread) and a luxury good (meat). The first allows the consumer to get a high level of calories at low cost, while the latter is preferable for its taste but expensive.
A poor consumer would eat a lot of bread and use the remaining income to buy meat.

So far this example has little interest, but everything changes when Marshall describes what would happen after an increase in the price of bread. The consumer would no longer be able to purchase the initial bundle; he would therefore buy even more bread and less meat, since replacing bread with meat would make his caloric intake fall. Although this conclusion is not difficult to understand, the existence of this type of phenomenon, known as a Giffen good, has intrigued successive generations of economists.
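Marshall’s story can be reproduced in a toy subsistence model (all parameters below are invented for illustration): the consumer spends all income and must meet a calorie floor, and because bread is the cheap calorie source, the only affordable response to a bread price rise is to give up meat and buy even more bread.

```python
def subsistence_bundle(p_bread, p_meat=4.0, income=20.0,
                       cal_bread=2.0, cal_meat=1.0, cal_floor=24.0):
    """Bundle that exhausts income while exactly meeting the calorie floor
    (the consumer prefers meat, so the calorie constraint binds).
    Solves: p_b*b + p_m*m = income and c_b*b + c_m*m = cal_floor."""
    det = cal_bread * p_meat - cal_meat * p_bread
    bread = (cal_floor * p_meat - income * cal_meat) / det
    meat = (income * cal_bread - cal_floor * p_bread) / det
    assert bread >= 0 and meat >= 0
    return bread, meat

bread_cheap, meat_cheap = subsistence_bundle(p_bread=1.0)
bread_dear, meat_dear = subsistence_bundle(p_bread=1.5)

# Giffen behaviour: the quantity of bread demanded RISES with its price,
# while meat consumption falls.
assert bread_dear > bread_cheap
assert meat_dear < meat_cheap
```

Nothing exotic drives the result: near subsistence, the income effect of a staple’s price rise is large and negative enough to overwhelm the substitution effect.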

This abnormal upward-sloping demand curve motivated many empirical research projects over the decades, all of them inconclusive. Even the well-known example of the Irish famine of 1845, when consumption of potatoes increased after their price rose, was not in fact very credible, because the increase in price was caused by the destruction of the other remaining crops.

However, in 2007 Jensen and Miller, both professors at Harvard University, conducted an experiment in rural China which allowed them to provide the first real-world evidence of the existence of a Giffen good. Using vouchers for staple food as a means to induce small price variations for extremely poor consumers of rice and wheat, they found statistical evidence of an upward-sloping demand for rice (the good which theory identifies ex ante as most likely to exhibit that behaviour, especially because it is the cheapest source of calories available).


In fact, if the Giffen effect is really observable in staple nutrition, policy makers in countries with extreme poverty rates are making a big mistake whenever they launch programs such as subsidized food. As the figure depicts, the impact of a price reduction may cause a decrease in the nutrition level from point A to B (represented by iso-calorie lines).

[Figure: budget lines and iso-calorie curves, showing nutrition falling from point A to point B after a staple price subsidy]

This experiment was designed so that internal and external validity were assured, and given that the researchers found robust results, it can constitute an important and essential benchmark for microeconomic policy analysis in what concerns extremely poor communities.

#86, Diogo Matos Mendes

Voucher System: solution or problem?

The issue of education, and of government intervention in education, has always been the subject of enormous public debate. One of the policies that have been adopted to make the education system more efficient is the education voucher scheme. It has been adopted in countries such as Chile and Sweden, and in some states of the United States of America.

The system can have different designs, but it generally works as follows: the government funds public and private schools equally, with an amount corresponding to the average spending per student in each municipality, so that schools are funded in proportion to the number of students they enrol. The reasoning behind this relies on two different effects, the so-called school effect and competition effect.

Firstly, assuming that private schools are in general better and more cost-effective, there will be gains in efficiency and quality from the simple fact that more students are able to attend private schools. The second effect, however, can be of much greater importance. Given that public schools compete for students with each other and with private schools (and given that public schools no longer enjoy “soft” budget constraints and failing schools are shut down), the competition effect should increase the quality of both public and private schools.

Nevertheless, there may be some issues, especially concerning equity. Cream-skimming can be a problem: if private schools draw the better students out of public schools and there is a strong peer effect, the quality of already weak public schools will improve less. Greater segregation will also arise, which only tends to reinforce such effects. Moreover, this kind of policy demands enormous autonomy for both public and private schools: only with that autonomy can competition act and give rise to efficiency gains and innovation. It also requires a good flow of information on school quality, and adequate choices on the part of parents.

Yet these problems might be solved through a proper design of the policy. For example, differentiated vouchers that recognize that students with different ability or social background bring different costs could prevent cream-skimming. Finally, information problems require that whoever regulates the system mandates the disclosure of adequate information, such as grades and, more importantly, the value added by the school. In conclusion, this system can be a step towards efficiency. However, without adequate design it can also be a step away from equity.
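The design point about differentiated vouchers can be illustrated with a small sketch (all costs and voucher values are invented): under a flat voucher, low-cost students are more profitable, which is exactly the cream-skimming incentive, while a voucher differentiated by student cost equalizes margins.

```python
# Illustrative only: per-student teaching costs and voucher values invented.
cost = {"low_need": 5000, "high_need": 8000}  # cost of teaching each type

flat_voucher = 6000
# Flat voucher: schools profit on "low_need" students and lose on
# "high_need" ones -- an incentive to cream-skim the cheap-to-teach.
margin_flat = {t: flat_voucher - c for t, c in cost.items()}

# A voucher covering each type's cost removes that incentive:
diff_voucher = {"low_need": 5000, "high_need": 8000}
margin_diff = {t: diff_voucher[t] - cost[t] for t in cost}

assert margin_flat["low_need"] > margin_flat["high_need"]
assert margin_diff["low_need"] == margin_diff["high_need"] == 0
```

With equal margins across student types, a school’s financial interest no longer depends on which students it admits, so selection pressure must come from quality instead.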

José Miguel Cerdeira #628