Category Archives: Investing

Low Volatility ETFs

The hip new financial product fangirled by every personal finance columnist on the internet is the low volatility ETF.  It's pretty much exactly what it sounds like – an ETF that, while tracking whichever index or industry it is supposed to, attempts to limit the variability of returns.  You can think of it as a stock with a low beta: it moves with the trend of the market, but not as severely in either direction during business cycle booms and busts.  Methodologies vary, but techniques are employed to limit both the variance of individual holdings and the correlation between them.  I analyzed the performance of the PowerShares Low Volatility S&P 500 ETF (SPLV) to see how it stacks up against the market as a whole.

Over the past four years, the S&P 500 had both a significantly higher maximum and a significantly lower minimum daily return than SPLV.  The S&P experienced many more extreme days (daily returns beyond +/- 1%), suggesting that returns on SPLV fluctuate less than the market's.  The S&P also earned a lower average daily return with higher variance than SPLV.

Period 5/6/11 to 1/6/15

                                     S&P 500    SPLV
Max Daily Return                       4.63%    3.75%
Min Daily Return                      -6.90%   -5.18%
Returns less than -1%                     98       62
Returns greater than 1%                  110       71
Average Daily Return                   0.04%    0.06%
Average Annual Return                  0.99%    0.75%
Standard Deviation of Daily Return    10.98%   14.03%
Standard Deviation of Annual Return   15.71%   11.85%

The table below repeats the analysis for 2014 alone, a year in which the U.S. equity market posted strong gains.

Year 2014

                                     S&P 500    SPLV
Max Daily Return                       2.37%    2.00%
Min Daily Return                      -2.31%   -1.99%
Returns less than -1%                     19       14
Returns greater than 1%                   19       13
Average Daily Return                   0.04%    0.06%
Average Annual Return                  0.72%    0.60%
Standard Deviation of Daily Return    10.70%   15.80%
Standard Deviation of Annual Return   11.34%    9.55%
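For anyone who wants to reproduce the tables, the statistics are simple to compute from daily close data.  Here's a minimal sketch in Python; the data loading is left as a stub, since any source of daily closes (Yahoo Finance, Quandl, etc.) will do:

    import pandas as pd

    def summarize(closes: pd.Series) -> dict:
        """Daily-return summary statistics like those in the tables above."""
        r = closes.pct_change().dropna()
        return {
            "max daily": r.max(),
            "min daily": r.min(),
            "days < -1%": int((r < -0.01).sum()),
            "days > +1%": int((r > 0.01).sum()),
            "mean daily": r.mean(),
            "std daily": r.std(),
        }

    # closes_spx, closes_splv = ...  # load daily close series for ^GSPC and SPLV
    # print(summarize(closes_spx))
    # print(summarize(closes_splv))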

The claim that the PowerShares Low Volatility ETF (SPLV) tracks the S&P with less variability in returns is corroborated by this simple analysis.  The graph of daily close prices and trading volume below points the same way – the S&P 500 Index (yellow) fluctuates around the steadier path followed by SPLV (blue).  The ETF misses out on some gains during the summer months, but outperforms later in the year.

[Graph: daily close prices and trading volume – S&P 500 (yellow) vs. SPLV (blue)]

Interestingly, the fund achieves its low volatility by being overweight in Healthcare and Financials, not the quintessentially low-risk sectors like Telecom or Utilities.

[Chart: SPLV sector breakdown]

 


Mortgage Market Update from Calculated Risk

Calculated Risk is a blog that aggregates and analyzes financial and economic data as it is released, particularly data that applies to the housing market.  The sheer number of economic and financial metrics available on the internet is useful in some contexts, but it often feels more like a confusing, frustrating glut of information that makes even a pithy question like “What is the rate of foreclosures in the current housing market relative to pre-crisis times?” difficult to answer.  That's where I've found Calculated Risk really useful – the relevant data for a particular issue is laid out, cited, and analyzed clearly and in a timely fashion.

I was curious about the housing market after meeting a seemingly overzealous realtor on the train, and here’s what I found via Calculated Risk.

Delinquencies 

At the end of Q3 2014, the delinquency rate on 1-to-4-unit residential properties was 5.85% of all loans outstanding, down for the sixth consecutive quarter and the lowest rate since the end of 2007.  The delinquency rate does not include loans in foreclosure, though those too are at their lowest rate since the fourth quarter of ’07, at just under 2.5%.  Though foreclosures have come down from the stratospheric levels reached at their peak in 2010, they’re still more common than they were before the crisis.  Mortgages that are 30 and 60 days past due, on the other hand, have returned to approximately pre-crisis levels.

[Chart: residential mortgage delinquency and foreclosure rates over time]

Mortgage Rates 

30-year fixed-rate mortgage (FRM) rates are down 1 basis point (0.01 percentage points) from last week at 4.01%, roughly the same level as in 2011 and lower than last year’s 4.46%.  Obviously there isn’t “one” mortgage rate – the rate we’re talking about here is the one that applies to the most creditworthy borrowers in the best possible scenario.  Though other mortgage rates are benchmarked against it, it’s not a rate most borrowers should expect to be offered by a bank.

[Chart: 30-year fixed mortgage rates over time]

The seemingly small difference between a mortgage quoted at 4.01% and one at 4.45% has a surprisingly large financial impact over the life of a 30-year FRM.  A $250,000, 30-yr. FRM at a 4.01% nominal annual rate compounded monthly (as is typically the case) requires a monthly payment of $1,194.98, whereas the same mortgage at 4.45% requires $1,259.30.  With the higher payment, the borrower pays an additional $23,155 in interest over the term of the mortgage.
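These payment figures are easy to verify with the standard level-payment formula.  A quick sketch, using the loan terms from the example above:

    def monthly_payment(principal, annual_rate, years=30):
        """Level monthly payment on a fully amortizing fixed-rate mortgage."""
        i = annual_rate / 12            # monthly rate
        n = years * 12                  # number of payments
        return principal * i / (1 - (1 + i) ** -n)

    low = monthly_payment(250000, 0.0401)     # ~1194.98
    high = monthly_payment(250000, 0.0445)    # ~1259.30
    print(round((high - low) * 360, 2))       # extra interest over 360 payments, ~23155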

Another post discusses subdued refinancing activity, which I’d guess is the result of relatively static mortgage rates, as it’s generally only financially worthwhile to refinance when rates have fallen significantly.  Banks could also be offering fewer refinancing options after the crisis – a reasonable assumption given their cautious resumption of lending and the role that refinancing played in exacerbating the housing bubble.  I’m purely speculating, though, and I’ll look into this more later.

Residential Prices

A widespread slowdown in the rate of housing price increases has been taking hold since February of this year.  Residential prices aren’t decreasing – they’re rising at a slower and slower rate each month – and now sit 20% below their 2006 peak.  This is not to say we should expect, or even wish, that housing prices return to 2006 levels, which were clearly unsustainable.  Furthermore, though slow relative to preceding months, the 6%+ annualized growth experienced last month is still pretty strong and comfortably outpaces inflation.

 

[Chart: year-over-year change in residential house prices]


Level Payment vs. Sinking Fund Loans

Below is a document explaining how to derive the payment formulas for the most basic level payment and sinking fund loans.  This is a simple introduction; I’m currently working on a more detailed analysis of the benefits and drawbacks of various types of loans (installment, variable rate, etc.) using empirical data and considering scenarios like the option to refinance and varying interest rates.  I used the results from my post on annuity formulas to simplify the derivation, so if you’re confused about how I got from one step to the next, check there!  A quick numerical sketch of the difference between the two loan types follows the document link.

Level Payment and Sinking Fund Loans
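As a rough numerical sketch of the difference (illustrative rates, not figures from the document): a level payment loan blends interest and principal into one payment computed from the annuity-immediate present value, while a sinking fund loan pays interest each period plus a level deposit into a fund that accumulates to the principal.

    def level_payment(L, i, n):
        """Level payment on a loan of L at periodic rate i for n periods."""
        a_n = (1 - (1 + i) ** -n) / i            # PV of a unit annuity-immediate
        return L / a_n

    def sinking_fund_outlay(L, i_loan, j_fund, n):
        """Periodic interest on L plus the deposit that grows to L in n periods."""
        s_n = ((1 + j_fund) ** n - 1) / j_fund   # FV of a unit annuity-immediate
        return L * i_loan + L / s_n

    print(level_payment(100000, 0.06, 10))               # ~13586.80
    print(sinking_fund_outlay(100000, 0.06, 0.04, 10))   # ~14329.09

When the sinking fund earns less than the loan rate, the total periodic outlay is higher than the level payment – the classic trade-off between the two structures (they coincide when the two rates are equal).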


Deriving the Present Value and Future Value of an Annuity Immediate

Below is the derivation of the present and future value of a unit annuity immediate – a series of $1 cash flows that occur at equal intervals of time, at the end of each period.  I originally wrote this document as a review for myself in preparation for actuarial exam FM/2.  Despite the wide array of topics covered, the majority of questions on the exam come down to solving for the value of some annuity.  Granted, it likely won’t be a case as simple as the one below, but many problems about loans, bonds, yield rates, and even financial derivatives boil down to an annuity problem.

Annuity_Derivation
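For reference, the results derived in the document are the standard unit annuity-immediate values, with periodic effective rate $latex i$ and discount factor $latex v = 1/(1+i)$:

$latex a_{\overline{n}|} = \frac{1 - v^n}{i}, \qquad s_{\overline{n}|} = \frac{(1+i)^n - 1}{i}$

The present value $latex a_{\overline{n}|}$ discounts each of the $latex n$ payments to time 0; accumulating it forward $latex n$ periods gives the future value, $latex s_{\overline{n}|} = (1+i)^n \, a_{\overline{n}|}$.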


A Post on Measuring Historical Volatility

I’ve reblogged a concise yet thorough explanation of how market volatility is calculated.  The post makes very clear how input parameters (weighting, time frame, etc.) affect its validity as an estimate of future market movements (link).  The phrase “fat tails” is often thrown around like a meaningless buzzword in financial media (Squawk Box, for example), but the concept is explained intuitively here.  In a separate post, market data from the S&P 500 is used to demonstrate the decay factor’s effect on log returns (link).

 

mathbabe

Say we are trying to estimate risk on a stock or a portfolio of stocks. For the purpose of this discussion, let’s say we’d like to know how far up or down we might expect to see a price move in one day.

First we need to decide how to measure the upness or downness of the prices as they vary from day to day. In other words we need to define a return. For most people this would naturally be defined as a percentage return, which is given by the formula:

$latex (p_t - p_{t-1})/p_{t-1},$

where $latex p_t$ refers to the price on day $latex t$. However, there are good reasons to define a return slightly differently, namely as a log return:

$latex \log(p_t/p_{t-1})$

If you know your power series expansions, you will quickly realize there is not much difference between these two definitions for small returns- it’s only…

View original post 807 more words
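(Not part of the reblogged post, but to make the decay-factor idea concrete, here is a minimal sketch of an exponentially weighted volatility estimate on log returns.  The 0.94 decay is the common RiskMetrics choice; the price series is made up.)

    import numpy as np

    prices = np.array([100.0, 101.2, 100.5, 102.0, 101.1, 103.4])  # hypothetical closes
    log_returns = np.diff(np.log(prices))

    def ewma_vol(returns, lam=0.94):
        """Exponentially weighted vol: the most recent return gets the largest weight."""
        weights = (1 - lam) * lam ** np.arange(len(returns))[::-1]
        weights /= weights.sum()           # renormalize the truncated weight series
        return np.sqrt(np.sum(weights * returns ** 2))

    print(ewma_vol(log_returns))           # one-day volatility estimate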


Is the Stock Market a Viable Barometer of Economic Health?

The S&P’s record close of 1992.37 on Thursday raises the following question: what, if anything, does a soaring stock market index, up almost 8% just this year, say about the health of the real economy?  As I’ve mentioned previously, there are quite a few issues in the current U.S. economy that may need to be rectified before the real economy can sustain robust growth – a weak labor force and stagnant wage growth, for example.  If we interpret the appreciation of a stock market index as a sign of economic health, as many pundits on TV seem to do, then Thursday’s record close seems to contradict the assertion that wage growth and a robust labor force are vital to the U.S. economy’s health.  This subject is briefly addressed on page 101 of Freefall, economist Joseph Stiglitz’s account of the financial crisis, its causes, and its aftermath.  He says:

“Unfortunately, an increase in stock market prices may not necessarily indicate that all is well.  Stock market prices may rise because the Fed is flooding the world with liquidity, and interest rates are low, so stocks look much better than bonds.  The flood of liquidity coming from the Fed will find some outlet, hopefully leading to more lending to businesses, but it could also result in a mini-asset price or stock market bubble.  Or rising stock market prices may reflect the success of firms in cutting costs – firing workers and lowering wages.  If so, it’s a harbinger of problems for the overall economy.  If workers’ incomes remain weak, so will consumption, which accounts for 70 percent of GDP.” 

I quoted the preceding passage because it cogently argues that stock market gains are not necessarily emblematic of a healthy economy, as the media – particularly business-oriented news shows – often suggest.  The two scenarios Stiglitz mentions (expansionary monetary policy and firms cutting costs) produce higher stock prices but not a healthier economy.  It is therefore erroneous to treat the price of the S&P 500 as a sufficient and reliable barometer of economic health.


Econ Week in Review: 6/9 – 6/15

Emerging markets have lost momentum throughout the past year, as investors adjust to the changing macroeconomic climate. According to news outlets (MarketWatch, Bloomberg, Liberty Street), global risk aversion is to blame.  Though emerging economies are still poised for more GDP growth than their developed counterparts in 2014 – 4.9% versus 2.2% – they aren’t expected to increase that growth rate by much in ’15 and beyond.

Growth prospects that are relatively weak compared to previous estimates (though still strong compared to other economies) are one reason why investors are pulling out of emerging market economies, as evidenced by capital outflows in those markets (IMF).  Another is the outlook for developed economies, particularly the United States, which looks much better than it did a year ago.  Now that investors expect the U.S. economy to recover and interest rates to move away from the zero lower bound, they expect new financial opportunities to emerge in developed economies as well.  With a decent return available in advanced economies, there’s less incentive to take on the risk associated with emerging markets.  There is evidence of this in the capital outflows immediately following Ben Bernanke’s speech in May 2013; it seems the cautiously positive economic outlook Bernanke conveyed led to a sell-off in emerging markets that has continued for the past year.  Some economists have rationalized this phenomenon using the VIX as an indicator of risk aversion.

Capital outflows put emerging market economies in a tight credit position, constraining their growth potential.  Previously they had enjoyed abundant credit because investors in advanced economies had to go abroad for financial returns.  Furthermore, it has been shown that Quantitative Easing and related policies in the U.S. put downward pressure on interest rates in emerging markets, facilitating even easier borrowing.  This shouldn’t come as a surprise since financial markets are becoming increasingly integrated, but it could pose some problems by limiting the effectiveness of domestic monetary policy (full discussion of if/how U.S. policy affects global markets here).  Integrated financial markets are generally a good thing, though, and the fragmentation that currently plagues much of Europe is a pertinent example of that.

I’m going to write a follow-up on QE and the role of expectations in the next week or so (after I finish reading the IMF’s global economic outlook), and hopefully delve deeper into the current situation in emerging market economies.


The Real Pension Crisis

As the term “Pension Reform” becomes a media buzzword, politicians on both ends of the political spectrum insist that reforming the U.S. public pension system is at the top of their gubernatorial to-do lists. The rhetoric centers around two basic issues: underfunding as a result of insufficient contributions made by local and state governments, and contractual details such as cost of living adjustments, retirement age, and employee contributions. It’s not news that U.S. public pensions are severely underfunded – dismal coverage ratios, particularly in California and Illinois, have been published in recent years. One way to rectify the deficit in pension funds is to cut benefits by changing the contractual details mentioned previously; for example, increasing the retirement age or decreasing cost of living adjustments. Doing this is politically unpopular, however, and as such politicians are generally wary of making any significant structural changes to public pensions out of fear of retribution at the ballot box. Underfunding and the risk it poses for pension benefits is certainly an issue that warrants the public’s attention and outrage, but it is not the biggest issue that plagues the U.S. public pension system. The bigger and lesser-known issue is that the reported state of the system, although dire, severely understates the true degree of underfunding.

Though it has been publicly established that liabilities invariably exceed assets in the U.S. public pension system, a bigger issue is that virtually all of those liabilities are also severely underestimated, and will continue to be valued as such until structural reform. Under GASB requirements, U.S. public pension funds can set their own discount rate and asset allocation almost arbitrarily. The rules link liability discount rates to the expected return on assets – that is, to the riskiness of assets – and so present a clear incentive to invest in riskier assets in order to justify high liability discount rates. Higher liability discount rates ensure that liabilities are lower on paper, since each future payment is discounted by a larger factor. Thus, politicians can contribute less money in real terms to pension funds and maintain a favorable funding position on paper without raising any more revenue or changing the benefit formula. Of course, the true funding situation hasn’t changed, only the number reported to the public.
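The arithmetic behind this is simple but the effect is large. As a purely illustrative example (my numbers, not any plan’s actual figures), a $1 billion benefit payment due in 30 years looks less than half as large on paper when discounted at 8% instead of 5%:

$latex \frac{\$1\,\mbox{billion}}{(1.08)^{30}} \approx \$99\,\mbox{million}, \qquad \frac{\$1\,\mbox{billion}}{(1.05)^{30}} \approx \$231\,\mbox{million}$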

Given the state of GASB requirements, the facts reveal a clear conflict of interest: stakeholders have a direct incentive to choose a higher discount rate in order to disguise the degree to which plans are underfunded. In terms of pension benefits, the result is that the present value of promised benefits is underestimated and does not reflect what the system will actually have to pay when they come due. In terms of asset allocation, GASB gives stakeholders an incentive to shift toward riskier assets so as to justify smaller contribution levels. Investing in riskier assets has given U.S. public pension funds the ability to maintain high discount rates and report lower liability valuations, both of which correspond to less money that needs to be set aside today to meet those obligations in the future. This ‘fix’ allows a politician to remedy the pervasive lack of pension funding without allocating another dollar from the state budget. No difficult budgetary decisions have to be made, and no voters are upset in the short term. The task of making responsible budgetary decisions is transferred to future generations, the state can report a higher pension coverage ratio without (technically) lying, and there is more money in the budget for politicians to buy themselves yachts and fur coats.

The preceding information suggests that discount rates are chosen by U.S. public pensions on the basis of political, as opposed to financial, criteria. This suspicion is corroborated in “Pension Fund Asset Allocation and Liability Discount Rates” by Andonov, Bauer, and Cremers, a cross-sectional study of over 800 defined benefit pension funds in three countries over the course of 20 years, among other studies (see Sielman, 2013; Brown and Wilcox, 2009; and Novy-Marx and Rauh, 2009). In the private sector, liability discount rates decreased over the period, reaching 5.7% in 2010 along a path that closely mirrored the ten-year Treasury yield. Public funds, in contrast, retained high rates of 7.5 – 8% throughout the period, despite the systemic decline in interest rates. The only way to maintain such high assumed returns when low-risk rates like Treasury yields decline is to allocate a larger portion of the fund to riskier assets, a conclusion that Andonov, Bauer, and Cremers confirm empirically: public funds shifted 20% more into risky assets in 2010 than they did 20 years earlier, whereas private fund asset allocation remained virtually unchanged. The paper also finds that larger funds allocate proportionally more to risky investments. In short, U.S. public pension funds respond to declining Treasury yields recklessly, manipulating asset allocation to take on enough risk that a high discount rate can be used, thereby ‘meeting’ obligations without actually setting aside any more money. Of course, it doesn’t take a financier to see that this method is little more than an accounting trick to appease public employee unions and state auditors simultaneously.

An increasingly popular trend in U.S. public pension fund management is the smoothing of asset returns. Smoothing manipulates the time frame over which a return is measured in a way that makes returns look less volatile. Asset smoothing is not always a shady manipulation tactic; when the measurement period is short, speculators and short-term investors may inflate the variance of the sample. Where that influence is thought severe enough to bias the variance of returns, it may be reasonable to smooth returns over a longer period and thus dampen the effects of short-term volatility. But there is considerable discretion involved in judging when the method is valid, and as such it has become a way to disguise the volatility of risky assets in U.S. public pension funds. Congress recently decided that smoothing corporate bond rates over a 25-year period was appropriate for U.S. pension funds, replacing the previous upper limit of two years. It remains to be seen, however, whether anyone – congressman, finance professional, or anyone else for that matter – can argue convincingly that 1988 interest rates are of any significance when discounting future pension liabilities.
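A toy demonstration of why smoothing flatters reported volatility (an entirely made-up return series, not real fund data): averaging returns over multi-year windows mechanically shrinks the standard deviation that gets reported.

    import numpy as np

    rng = np.random.default_rng(0)
    annual = rng.normal(0.07, 0.15, 40)       # hypothetical annual returns

    window = 5
    smoothed = np.convolve(annual, np.ones(window) / window, mode="valid")

    print(annual.std())       # raw volatility
    print(smoothed.std())     # 'smoothed' volatility is far smaller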

For two reasons, U.S. public pension funds are unique in their propensity to mask the financial position of their investments. First, a private firm’s appetite for high returns is checked by risk: the cost of financial distress gives the firm an incentive to fund pensions fully, since underfunding could result in enormous payments, required by law, at the time benefits come due, and private firms do not have a tax base to exploit for those payments. Additionally, firms that are overfunded are exempt from PBGC insurance premiums, which lowers the true cost of funding pension obligations. Second, while private pensions must choose a discount rate as a function of current interest rates, U.S. public pensions are held to no regulatory standard when determining discount rates. Numerous studies have found no statistically significant association between liability discount rates and interest rates for U.S. public pensions. The private sector, on the other hand, tends to decrease liability discount rates when interest rates decrease, thus adjusting expected earnings accurately and determining contribution levels accordingly. On average, the discount rate used by public funds is 190 basis points higher than its private sector counterpart.

Robert Novy-Marx and Joshua D. Rauh use discount rates that reflect the risk level associated with the pension promises themselves to calculate the true present value of liabilities already promised, and their results show the gravity of the GASB shortcomings. In 2007, the total liabilities stated in annual reports for the 116 largest U.S. public pension plans were $2.81 trillion, against $1.94 trillion in assets. As mentioned earlier, public outcry would be justified even if this were the true extent of underfunding. Discounting the same liabilities at an appropriate rate, however, the authors conclude that their present value is $5.17 trillion, corresponding to a $3.23 trillion deficit. Facilitating this cover-up are GASB accounting standards that promote flagrant conflicts of interest and opaque public finances. The result is an issue far more serious than the so-called “Pension Crisis” reported in the media.

 


Modeling Stock Market Behavior

In the finance world, there’s some debate about whether the daily closing prices of various stock market indices convey useful information.  Some financiers believe the daily close price reflects market trends and affects the probability of realizing a good return.  Others disagree, claiming that day-to-day movements in the stock market are completely random and convey no useful information.  If the latter is true, the number of consecutive daily declines before a rise should behave like a geometric random variable.  In this post I’ll explain why the geometric model would imply that stock market fluctuations are random, and then test the validity of the model empirically.

Suppose the outcome of some event is binary, with success occurring with probability p.  Failure must then occur with probability 1 – p.  A geometric random variable models the number of trials up to and including the first success: it takes on the value k when the first success occurs on the kth trial.  Trials are assumed to be independent, so we can write the probability mass function of the random variable X as follows:

$latex P(X = k) = P(\mbox{failures on trials } 1, \ldots, k-1 \mbox{ and success on trial } k) = P(\mbox{failures on trials } 1, \ldots, k-1) \cdot P(\mbox{success on trial } k)$

We used the independence assumption to rewrite the probability of the event “k – 1 failures and a success on trial k” as the product of the probabilities of two distinct groups of events, namely k – 1 failures and then 1 success.  Now we use the fact that success occurs with probability p (and the independence assumption, again) to write the following:

$latex P(X = k) = (1-p)^{k-1}\,p, \qquad k = 1, 2, 3, \ldots$

To model the behavior of the stock market as a geometric random variable, assume that on day 1 the market has fallen from the previous day.  We’ll call this fall in the closing price a “failure”, which occurs with probability 1 – p.  Let the geometric random variable X represent the total number of declines that occur until the stock market rises (a “success”).  For example, if the market rises on day 2, X takes on the value 1, because there was only one decline (day 1’s) before the rise on day 2.  Similarly, if the market declines on days 2, 3, and 4 and rises on day 5, it has declined on four occasions before rising, and X takes on the value 4.  Keep in mind that the formulation stipulates a decline on day 1, so falls on days 2, 3, and 4 form a sequence of four failures, not three; and because the day-1 decline is given, X = k still occurs with probability (1 – p)^(k–1) p, with only the k – 1 subsequent declines each contributing a factor of 1 – p.

To determine whether a geometric model fits the daily behavior of the stock market, we have to estimate the parameter p.  We are asking whether stock market price fluctuations are geometric: a geometric random variable can have any value of p between 0 and 1, so the model doesn’t presuppose the probability with which the market rises or falls; it only describes the behavior of the random variable for a given p.  The value p takes on may be of interest in formulating other questions, but here its job is to create a realistic geometric model that we can compare to empirical stock market data.  If the data fit the geometric model, the implication is that the market rises and falls randomly with a constant probability of success.  This suggests that daily stock market quotes are meaningless in the sense that today’s price does not reflect historical prices.  One could say that if this model fits, the stock market doesn’t “remember” yesterday – and indeed this is the memoryless property, which the geometric distribution shares with its continuous analogue, the exponential distribution.

Once we get some empirical data, we’re going to estimate the probability of success p.  So let’s solve the general case first and then compute an actual value with data afterwards.  There is no single way to estimate the value of a parameter, but one good way is the maximum likelihood estimator.  The idea is simple, though sometimes computationally difficult: to estimate p by maximum likelihood, we find the value of p under which the observed sample is most likely to have occurred.  We are maximizing the “likelihood” that the sample data comes from a distribution with parameter p.  To do this, we form the likelihood function, the product of the probability mass function of the underlying distribution evaluated at each sample value:

$latex L(\theta) = \prod_{i=1}^{n} f(k_i;\, \theta)$

For our model, we just substitute the pmf of a geometric random variable for the generic pmf above and replace theta with p, the probability of success:

$latex L(p) = \prod_{i=1}^{n} (1-p)^{k_i - 1}\, p = p^{\,n}\,(1-p)^{\sum_{i=1}^{n} k_i - n}$

To find the maximum likelihood estimate of p, we maximize the likelihood function with respect to p: take its derivative with respect to p and set it equal to 0.  It’s computationally simpler, however, to work with the natural logarithm of the likelihood function.  This doesn’t change the maximizing value of p, since the natural logarithm is a monotonically increasing function of L(p).  Sometimes you’ll hear of “log-likelihood functions”, and this is precisely what they are – the log of a likelihood function, which makes for easier calculus.

$latex \ln L(p) = n \ln p + \left( \sum_{i=1}^{n} k_i - n \right) \ln (1-p)$

Taking the derivative of this function is a lot easier than differentiating the likelihood function directly:

$latex \frac{d}{dp} \ln L(p) = \frac{n}{p} - \frac{\sum_{i=1}^{n} k_i - n}{1-p} = 0 \quad \Longrightarrow \quad \hat{p} = \frac{n}{\sum_{i=1}^{n} k_i} = \frac{1}{\bar{k}}$

So our maximum likelihood estimate of p (the probability of success) is one divided by the sample average or, equivalently, n divided by the sum of all the k values in our sample.  This gives us the value of p most consistent with the n observations k_1, …, k_n.  Below is a table of k values derived from closing data for the Dow Jones over the course of the 2006–2007 year.

Recall that the random variable X takes on the value k when k – 1 failures (market price decreases) occur before a success (a price increase) on trial k.  For example, X takes on the value k = 1 on 72 occasions in our dataset, which means that 72 times over the course of the year there was only one decline before the first rise; that is, the market declined on day 1 (by definition) and rose on day 2.  Similarly, there were 35 occasions where two declines were realized before a rise, because X took on the value k = 2 on 35 occasions.  (A code sketch of how these streak lengths can be extracted from raw closing prices follows the table.)

K    Observed Frequency
1    72
2    35
3    11
4     6
5     2
6     2
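Extracting the k values from price data amounts to measuring the lengths of losing streaks.  A sketch with a short hypothetical price series:

    import numpy as np

    # hypothetical daily closes; in practice, load a year of Dow Jones data
    closes = np.array([100.0, 99.5, 99.0, 100.2, 99.8, 100.5, 101.0, 100.1, 99.9, 100.6])

    up = np.diff(closes) > 0      # True = rise (success), False = decline (failure)

    ks, streak = [], 0
    for rise in up:
        if rise:
            if streak > 0:        # only count streaks that began with a decline
                ks.append(streak)
            streak = 0
        else:
            streak += 1           # one more consecutive decline

    print(ks)  # number of declines before each rise, here [2, 1, 2]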

We now have the necessary data to compute p.  We have 128 observations (values of k), so n = 128.  There are two equivalent ways to compute the estimate: take the sample mean of the dataset as we normally would for a discrete random variable and then take its reciprocal, or divide n by the sum of all the k values directly:

$latex \bar{k} = \frac{\sum_{i=1}^{n} k_i}{n} = \frac{221}{128} \approx 1.7266, \qquad \hat{p} = \frac{1}{\bar{k}} = \frac{128}{221} \approx 0.5792$

The second formula obviously yields the same result, as you compute 128/221 directly instead of first computing its reciprocal.  We now have a maximum likelihood estimate of the parameter p, which we can use to model stock price movement as a geometric random variable.  Suppose the stock market really can be modeled this way: given our value of p, what should we expect for the values of k?  That is, in what proportion, or with what frequency, do we expect X to take on the values k = 1, 2, …?  We’ll compute this first and then compare against the empirical data.

$latex P(X = 1) = (1 - \hat{p})^{0}\,\hat{p} = \hat{p} = 0.5792$

The probability that X takes on the value 1 is just the probability of success, which makes sense: X = 1 corresponds to a success on the day immediately following the initial failure.

$latex P(X = 2) = (1 - \hat{p})^{1}\,\hat{p} = (0.4208)(0.5792) \approx 0.2437$

And the rest are computed the same way.  Since we have 128 observations, we can multiply each expected percentage by 128 to get an expected frequency, and then compare these to the observed frequencies to judge how well the model fits.

K    N      Expected %    Expected Frequency
1    128    .5792         74.14
2    128    .2437         31.19
3    128    .1027         13.13
4    128    .0432          5.52
5    128    .0182          2.32
6    128    .0132          1.69

Now that we know what we should expect if the geometric model is a valid representation of the stock market, let’s compare the expected frequencies to the observed frequencies:

K    Expected Frequency    Observed Frequency
1    74.14                 72
2    31.19                 35
3    13.13                 11
4     5.52                  6
5     2.32                  2
6     1.69                  2

The geometric model appears to be a very good fit, which suggests that daily fluctuations in stock market prices are random.  Furthermore, stock indices don’t ‘remember’ yesterday – the probability of the market rising or falling is constant, and whether it actually rises or falls on a given day is subject to random chance.
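For a slightly more formal check than eyeballing the two columns, we can compute a chi-square goodness-of-fit statistic from the frequencies above:

    import numpy as np

    observed = np.array([72, 35, 11, 6, 2, 2], dtype=float)
    expected = np.array([74.14, 31.19, 13.13, 5.52, 2.32, 1.69])

    chi_sq = np.sum((observed - expected) ** 2 / expected)
    print(chi_sq)   # roughly 1.0

With six bins and one estimated parameter, the statistic has about four degrees of freedom; the 5% critical value is roughly 9.49, so a value near 1 gives no grounds to reject the geometric model.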


Visualizing the Duration of Assets

This is just a simple example comparing the durations of two different assets.  Investment 1 pays $1000 in year 1, $2000 in year 2, and $3000 in year 3; Investment 2 pays $3000 in year 1, $2000 in year 2, and $1000 in year 3.  At an interest rate of 8%, investment 1 is worth $5022.10 and investment 2 is worth $5286.29.

Obviously asset 1 is riskier because its larger payments come later, which means the bulk of its value is more exposed to interest rate fluctuations.  We would expect it to have a higher duration than asset 2.  This is easy to verify: the Macaulay duration of asset 1 is 2.2898 (modified duration 2.1202), while that of asset 2 is 1.6247 (modified duration 1.5044).  We’d therefore expect asset 1 to be disproportionately affected by interest rate changes, which is evident from the graph below.
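These figures are straightforward to reproduce.  A quick sketch using the cash flows above:

    def pv_and_durations(cash_flows, rate):
        """PV, Macaulay duration, and modified duration for end-of-year cash flows."""
        pv_flows = [cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1)]
        pv = sum(pv_flows)
        macaulay = sum(t * pvf for t, pvf in enumerate(pv_flows, start=1)) / pv
        return pv, macaulay, macaulay / (1 + rate)

    print(pv_and_durations([1000, 2000, 3000], 0.08))  # ~(5022.10, 2.2898, 2.1202)
    print(pv_and_durations([3000, 2000, 1000], 0.08))  # ~(5286.29, 1.6247, 1.5044)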

[Graph: present value of each investment as a function of the interest rate]

The PVs of the two assets are almost the same at low interest rates, and investment 1 loses more value than investment 2 as interest rates rise.

 

