Review – Quantitative Value (#valueinvesting, #quant, @greenbackd, @turnkeyanalyst)

Quantitative Value: A Practitioner’s Guide to Automating Intelligent Investment and Eliminating Behavioral Errors + website (buy on Amazon.com)

by Wesley R. Gray, PhD & Tobias E. Carlisle, LLB, published 2012

A “valueprax” review always serves two purposes: to inform the reader, and to remind the writer. Find more reviews by visiting the Virtual Library. Please note, I received a copy of this book for review from the publisher, Wiley Finance, on a complimentary basis.

The root of all investors’ problems

In 2005, renowned value investing guru Joel Greenblatt published a book that explained his Magic Formula stock investing program: rank the universe of stocks by price and quality, then buy a basket of the companies that scored best on the equally-weighted measures. The Magic Formula promised big profits with minimal effort and even less brain damage.

But few individual investors were able to replicate Greenblatt’s success when applying the formula themselves. Why?

By now it's an old story to anyone in the value community, but the lesson learned is that the formula provided a ceiling on potential performance: attempts by individual investors to improve upon the model's picks actually detracted from that performance rather than adding to it. There was nothing wrong with the model, but there was a lot wrong with the people using it, because they were humans prone to behavioral errors rooted in their individual psychological profiles.

Or so Greenblatt said.

Building from a strong foundation, but writing another chapter

On its face, "Quantitative Value" by Gray and Carlisle simply builds off the work of Greenblatt. But Greenblatt was building off of Buffett, and Buffett and Greenblatt were building off of Graham. Along with integral concepts like margin of safety, intrinsic value and the Mr. Market metaphor, the reigning thesis of Graham's classic handbook, The Intelligent Investor, was that at the end of the day, every investor is their own worst enemy. It is only by focusing on our habit of erring on a psychological level that we have any hope of beating the market (and not losing our capital along the way), for the market is nothing more than the aggregate of the psychological failings of the investing public.

It is in this sense that the authors describe their use of “quantitative” as,

the antidote to behavioral error

That is, rather than being a term that symbolizes mathematical discipline and technical rigor and computer circuits churning through financial probabilities,

It’s active value investing performed systematically.

The authors are beholden to a quantitative, model-based approach because they see it as a reliable way to overcome the foibles of individual psychology and fully capture the value premium available in the market. Success in value investing is process-driven, so the two necessary components of a successful investment program based on value investing principles are 1) choosing a sound process for identifying investment opportunities and 2) consistently investing in those opportunities when they present themselves. Investors cost themselves precious basis points every year when they systematically avoid profitable opportunities due to behavioral errors.

But the authors are being modest because that’s only 50% of the story. The other half of the story is their search for a rigorous, empirically back-tested improvement to the Greenblattian Magic Formula approach. The book shines in a lot of ways but this search for the Holy Grail of Value particularly stands out, not just because they seem to have found it, but because all of the things they (and the reader) learn along the way are so damn interesting.

A sampling of biases

Leaning heavily on the research of Kahneman and Tversky, Quantitative Value offers a smorgasbord of delectable cognitive biases to choose from:

  • overconfidence, placing more trust in our judgment than is due given the facts
  • self-attribution bias, the tendency to credit success to skill and failure to luck
  • hindsight bias, belief in ability to predict an event that has already occurred (leads to assumption that if we accurately predicted the past, we can accurately predict the future)
  • neglect of the base rate and the representativeness heuristic, ignoring the prior probability of an event by focusing on the extent to which one possible event resembles another
  • availability bias, heavier weighting on information that is easier to recall
  • anchoring and adjustment biases, relying too heavily on one piece of information against all others; allowing the starting point to strongly influence a decision at the expense of information gained later on

The authors stress, with numerous examples, the idea that value investors suffer from these biases much like anyone else. Following a quantitative value model is akin to playing a game like poker systematically and probabilistically,

The power of quantitative investing is in its relentless exploitation of edges

Good poker players make their money by refusing to play expensive pots where the odds are against them, and by shoving their chips in gleefully when they have the best of it. QV offers the same opportunity to value investors: a way to resist the temptation to make costly mistakes and to ensure your chips are in the pot when the winning percentages are on your side.

A model development

Gray and Carlisle declare that Greenblatt’s Magic Formula was a starting point for their journey to find the best quantitative value approach. However,

Even with a great deal of data torture, we have not been able to replicate Greenblatt’s extraordinary results

Given the thoroughness of their data collection and back-testing, elaborated upon in later chapters, this finding is surprising and perhaps distressing for advocates of the MF approach. Nonetheless, the authors don't let it frustrate them too much and push on ahead to find a superior alternative.

They begin their search with an “academic” approach to quantitative value, “Quality and Price”, defined as:

Quality, Gross Profitability to Total Assets = (Revenue – Cost of Goods Sold) / Total Assets

Price, Book Value-to-Market Capitalization = Book Value / Market Capitalization
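
For concreteness, here is a minimal sketch of the two ratios in Python, transcribed from the definitions just given; the sample figures are invented for illustration:

```python
def gross_profitability(revenue, cogs, total_assets):
    """Quality: gross profits scaled by total assets (GPA)."""
    return (revenue - cogs) / total_assets

def book_to_market(book_value, market_cap):
    """Price: book value scaled by market capitalization."""
    return book_value / market_cap

# Hypothetical firm: $500 revenue, $300 COGS, $1,000 total assets,
# $400 book value, $600 market cap
print(gross_profitability(500, 300, 1000))  # 0.2
print(book_to_market(400, 600))             # ~0.667
```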

The reasons for choosing GPA as a quality measure are:

  • gross profit measures economic profitability independently of direct management decisions
  • gross profit is capital structure neutral
  • total assets are capital structure neutral (consistent w/ the numerator)
  • gross profit better predicts future stock returns and long-run growth in earnings and FCF

Book value-to-market is chosen because:

  • it more closely resembles the MF convention of EBIT/TEV
  • book value is more stable over time than earnings or cash flow

The result of the backtested horserace between the Magic Formula and the academic Quality and Price from 1964 to 2011 was that Quality and Price beat the Magic Formula, with CAGRs of 15.31% and 12.79%, respectively.

But Quality and Price is crude. Could there be a better way, still?

Marginal improvements: avoiding permanent loss of capital

To construct a reliable quantitative model, one of the first steps is “cleaning” the data of the universe being examined by removing companies which pose a significant risk of permanent loss of capital because of signs of financial statement manipulation, fraud or a high probability of financial distress or bankruptcy.

The authors suggest that one tool for signaling earnings manipulation is scaled total accruals (STA):

STA = (Net Income – Cash Flow from Operations) / Total Assets

Another measure the authors recommend using is scaled net operating assets (SNOA):

SNOA = (Operating Assets – Operating Liabilities) / Total Assets

Where,

OA = total assets – cash and equivalents

OL = total assets – ST debt – LT debt – minority interest – preferred stock – book common equity
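
These translate directly into code; here is a sketch of both measures using the formulas above, with parameter names that are my own shorthand for the statement line items:

```python
def sta(net_income, cfo, total_assets):
    """Scaled total accruals: a wide gap between accounting earnings
    and cash flow from operations can flag earnings manipulation."""
    return (net_income - cfo) / total_assets

def snoa(total_assets, cash_eq, st_debt, lt_debt,
         minority_interest, preferred, common_equity):
    """Scaled net operating assets, per the OA and OL definitions above."""
    oa = total_assets - cash_eq
    ol = (total_assets - st_debt - lt_debt
          - minority_interest - preferred - common_equity)
    return (oa - ol) / total_assets

print(sta(100, 60, 1000))  # 0.04: accruals are 4% of total assets
```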

They stress,

STA and SNOA are not measures of quality… [they] act as gatekeepers. They keep us from investing in stocks that appear to be high quality

They also delve into a number of other metrics for measuring or anticipating the risk of financial distress or bankruptcy, including a metric called "PROBM" and the Altman Z-Score, which the authors have modified into what they believe is an improved version.

Quest for quality

With the risk of permanent loss of capital due to business failure or fraud out of the way, the next step in the Quantitative Value model is finding ways to measure business quality.

The authors spend a good amount of time exploring various measures of business quality, including Warren Buffett's favorites, Greenblatt's favorites as used in the Magic Formula, and a number of other alternatives, including proprietary measurements such as the FS_SCORE. But I won't bother going on about that, because buried within this section is a caveat that foreshadows a startling conclusion reached later in the book:

Any sample of high-return stocks will contain a few stocks with genuine franchises but consist mostly of stocks at the peak of their business cycle… mean reversion is faster when it is further from its mean

More on that in a moment, but first, every value investor's favorite subject: low, low prices!

Multiple bargains

Gray and Carlisle pit several popular price measurements against each other and then run backtests to determine the winner:

  • Earnings Yield = Earnings / Market Cap
  • Enterprise Yield(1) = EBITDA / TEV
  • Enterprise Yield(2) = EBIT / TEV
  • Free Cash Flow Yield = FCF / TEV
  • Gross Profits Yield = GP / TEV
  • Book-to-Market = (Common + Preferred BV) / Market Cap
  • Forward Earnings Estimate = FE / Market Cap

The result:

the simplest form of the enterprise multiple (the EBIT variation) is superior to alternative price ratios

with a CAGR of 14.55% per year from 1964 to 2011; the Forward Earnings Estimate performed worst, at an 8.63% per year CAGR.
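
For concreteness, here is a sketch of that winning ratio in code. The TEV construction (market cap + total debt + preferred + minority interest - cash) is a standard convention I am assuming, not necessarily the authors' exact definition, and the figures are invented:

```python
def ebit_tev_yield(ebit, market_cap, total_debt, cash,
                   preferred=0, minority=0):
    """EBIT / TEV; a higher yield means a lower (cheaper) multiple."""
    tev = market_cap + total_debt + preferred + minority - cash
    return ebit / tev

# Rank a tiny universe, cheapest first
universe = {
    "AAA": ebit_tev_yield(120, 900, 300, 100),
    "BBB": ebit_tev_yield(80, 1200, 100, 300),
}
for sym in sorted(universe, key=universe.get, reverse=True):
    print(sym, round(universe[sym], 3))  # AAA 0.109, BBB 0.08
```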

Significant additional backtesting and measurement using Sharpe and Sortino ratios lead to another conclusion:

the enterprise multiple (EBIT variation) metric offers the best risk/reward ratio

It also captures the largest value premium spread between glamour and value stocks. And even in a series of tests using normalized earnings figures and composite ratios,

we found the EBIT enterprise multiple comes out on top, particularly after we adjust for complexity and implementation difficulties… a better compound annual growth rate, higher risk-adjusted values for Sharpe and Sortino, and the lowest drawdown of all measures analyzed

meaning that a simple enterprise multiple based on nothing more than the last twelve months of data shines compared to numerous and complex price multiple alternatives.

But wait, there’s more!

The QV authors also test insider and short seller signals and find that,

trading on opportunistic insider buys and sells generates around 8 percent market-beating return per year. Trading on routine insider buys and sells generates no additional return

and,

short money is smart money… short sellers are able to identify overvalued stocks to sell and also seem adept at avoiding undervalued stocks, which is useful information for the investor seeking to take a long position… value investors will find it worthwhile to examine short interest when analyzing potential long investments

This book is filled with interesting micro-study nuggets like this. I chose to mention this one because I found it particularly relevant and interesting. More await the patient reader of the whole book.

Big and simple

In the spirit of Pareto's principle (or the 80/20 rule), the authors of QV exhort their readers to avoid the temptation to collect excess information when focusing on only the most important data can capture a substantial part of the total available return:

Collecting more and more information about a stock will not improve the accuracy of our decision to buy or not as much as it will increase our confidence about the decision… keep the strategy austere

In illustrating their point, they recount a funny experiment conducted by Paul Watzlawick in which two subjects, oblivious of one another, are asked to make rules for distinguishing between certain conditions of an object under study. What the participants don't realize is that one individual (A) is given accurate feedback on his rule-making while the other (B) is fed feedback based on the decisions of the hidden other, invariably leading to confusion and distress. B comes up with a complex, twisted rationalization for his decision-making rules (which are highly inaccurate), whereas A, who was in touch with reality, provides a simple, concrete explanation of his process. Yet it is A who is impressed and influenced by the apparent sophistication of B's thought process, and he ultimately adopts B's rules only to see his own accuracy plummet.

The lesson is that we do better with simple rules, which are better suited to navigating reality, but we prefer complexity. As an advocate of Austrian economics (author Carlisle is also a fan), I saw it as a wink and a nod toward why Keynesianism has come to dominate the intellectual climate of the academic and political worlds despite its poor predictive ability and ferociously arbitrary complexity compared to the "simplistic" Austrian alternative.

But I digress.

Focusing on the simplest, most effective rules is not just a big idea, it's a big bombshell, because the authors found that,

the Magic Formula underperformed its price metric, the EBIT enterprise multiple… ROC actually detracts from the Magic Formula’s performance [emphasis added]

Have I got your attention now?

The trouble is that the Magic Formula equally weights price and quality, when the reality is that a simple price metric like buying at high enterprise value yields (that is, at low enterprise value multiples) is much more responsible for subsequent outperformance than the quality of the enterprise being purchased. Or, as the authors put it,

the quality measures don’t warrant as much weight as the price ratio because they are ephemeral. Why pay up for something that’s just about to evaporate back to the mean? [...] the Magic Formula systematically overpays for high-quality firms… an EBIT/TEV yield of 10 percent or lower [is considered to be the event horizon for "glamour"]… glamour inexorably leads to poor performance

All else being equal, quality is a desirable thing to have… but not at the expense of a low price.

The Joe the Plumbers of the value world

The Quantitative Value strategy is impressive. According to the authors, it is good for 6-8% a year in alpha, or market outperformance, over a long period of time. Unfortunately, despite the emphasis on simple models over unwarranted complexity, it is also a highly technical approach which, as far as wholesale implementation is concerned, is best suited to the big guys in fancy suits with pricey data sources.

So yes, they’ve built a better mousetrap (compared to the Magic Formula, at least), but what are the masses of more modest mice to do?

I think a cheap, simplified Everyday Quantitative Value process might look something like this (a rough code sketch follows the list):

  1. Screen for ease of liquidity (say, $1B market cap minimum)
  2. Rank the universe of stocks by price according to the powerful EBIT/TEV yield (could screen for a minimum hurdle rate, 15%+)
  3. Run quantitative measurements and qualitative evaluations on the resulting list to root out obvious signals to protect against risk of permanent loss by eliminating earnings manipulators, fraud and financial distress
  4. Buy a basket of the top 25-30 results for diversification purposes
  5. Sell and reload annually
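
Here is that five-step process as a purely illustrative Python sketch; the field names, the forensic gate and its thresholds are my own assumptions, not anything from the book:

```python
def passes_forensics(f):
    """Stand-in for step 3: screen out likely manipulators and
    distressed names. The STA/SNOA cutoffs are illustrative guesses."""
    return f.get("sta", 0.0) < 0.1 and f.get("snoa", 0.0) < 1.0

def everyday_qv(universe, n=25, min_mcap=1e9, min_yield=0.15):
    """universe maps symbol -> dict of fundamentals."""
    candidates = [
        (f["ebit"] / f["tev"], sym)
        for sym, f in universe.items()
        if f["market_cap"] >= min_mcap          # 1. liquidity screen
        and f["ebit"] / f["tev"] >= min_yield   # 2. EBIT/TEV hurdle
        and passes_forensics(f)                 # 3. fraud/distress gate
    ]
    candidates.sort(reverse=True)               # cheapest (highest yield) first
    return [sym for _, sym in candidates[:n]]   # 4. buy the top basket
    # 5. sell and re-run the screen annually
```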

I wouldn't even bother trying to qualitatively assess the results of such a model, because I think that runs the immediate and dangerous risk, which the authors strongly warn against, of systematically detracting from the model's performance ceiling by injecting our own biases and behavioral errors into the decision-making process.

Other notes and unanswered questions

“Quantitative Value” is filled with shocking stuff. In clarifying that the performance of their backtests is dependent upon particular market conditions and political history unique to the United States from 1964-2011, the authors make reference to

how lucky the amazing performance of the U.S. equity markets has truly been… the performance of the U.S. stock market has been the exception, not the rule

They attach a chart which shows the U.S. equity markets leading a cohort of long-lived, high-return equity markets including Sweden, Switzerland, Canada, Norway and Chile. Japan, a long-lived equity market in its own right, has offered a negative annual return over its lifetime. And the PIIGS and BRICs are consistent as a group in being some of the shortest-lived, lowest-performing (many with net negative real returns since inception) equity markets measured in the study. It's also fascinating to see that the US, Canada, the UK, Germany, the Netherlands, France, Belgium, Japan and Spain all had exchanges established at approximately the same time: how and why did this uniform development occur in these particular countries?

Another fascinating item was Table 12.6, displaying "Selected Quantitative Value Portfolio Holdings": the top 5 ranked QV holdings for each year from 1974 through 2011. The trend in EBIT/TEV yields over time was noticeably downward, market capitalizations trended upward, and numerous names were also Warren Buffett/Berkshire Hathaway picks or were connected to other well-known value investors of the era.

The authors themselves emphasized that,

the strategy favors large, well-known stocks primed for market-beating performance… [including] well-known, household names, selected at bargain basement prices

Additionally, in a comparison covering 1991-2011, the QV strategy compared favorably on a number of important metrics with vaunted value funds such as Sequoia, Legg Mason and Third Avenue, and was superior in terms of CAGR.

After finishing the book, I also had a number of questions that I didn’t see addressed specifically in the text, but which hopefully the authors will elaborate upon on their blogs or in future editions, such as:

  1. Are there any reasons why QV would not work in other countries besides the US?
  2. What could make QV stop working in the US?
  3. How would QV be impacted if using lower market cap/TEV hurdles?
  4. Is there a market cap/TEV “sweet spot” for the QV strategy according to backtests? (the authors probably avoided addressing this because they emphasize their desire to not massage the data or engage in selection bias, but it’s still an interesting question for me)
  5. What is the maximum AUM you could put into this strategy?
  6. Would more/less rebalancing hurt/improve the model’s results?
  7. What is the minimum diversification (number of portfolio positions) needed to implement QV effectively?
  8. Is QV “businesslike” in the Benjamin Graham-sense?
  9. How is margin of safety defined and calculated according to the QV approach?
  10. What is the best way for an individual retail investor to approximate the QV strategy?

There’s also a companion website for the book available at: www.wiley.com/go/quantvalue

Conclusion

I like this book. A lot. As a "value guy", you always like being able to put something like this down and make a witty quip about how it qualifies as a value investment, or how its intrinsic value is being significantly discounted by the market, or what have you. I've only scratched the surface here in my review; there's a ton to chew on for anyone who delves in, and I didn't bother covering the numerous charts, tables and graphs strewn throughout the book which illustrate the various concepts and claims explored.

I do think this is heady reading for a value neophyte. And I am not sure, as a small individual investor, how suitable all of the information, suggestions and processes contained herein are for me to put into practice myself. Part of that is because it's obvious that to really do the QV strategy "right", you need a powerful and pricey datamine and probably a few codemonkeys and PhDs to help you go through it efficiently. The other part is that the authors were clearly aiming this book at academic and professional/institutional audiences (people managing fairly sizable portfolios).

As much as I like it, though, I don’t think I can give it a perfect score. It’s not that it needs to be perfect, or that I found something wrong with it. I just reserve that kind of score for those once-in-a-lifetime classics that come along, that are infinitely deep and give you something new each time you re-read them and which you want to re-read, over and over again.

Quantitative Value is good, it’s worth reading, and I may even pick it up, dust it off and page through it now and then for reference. But I don’t think it has the same replay value as Security Analysis or The Intelligent Investor, for example.

Notes – How Did I Come Up With My 16 JNets? (#JNets, #NCAV)

A couple of days ago, someone who follows my Twitter feed asked me what criteria I had used to pick the 16 JNets I talked about in a recent post. He mentioned that there were "300+" Japanese companies trading below their net current asset value. A recent post by Nate Tobik over at Oddball Stocks suggests that there are presently 448 such firms, definitely within the boundaries of the "300+" comment.

To be honest, I have no idea how many there are currently, nor when I made my investments. The reason is that I am not a professional investor with access to institution-grade screening tools like Bloomberg or CapitalIQ. Because of this, my investment process in general, but specifically with regards to foreign equities like JNets, relies especially on two principles:

  • Making do with "making do"; doing the best I can with the limited resources I have, within the confines of the time and personal expertise available to me
  • “Cheap enough”; making a commitment to buy something when it is deemed to be cheap enough to be worthy of consideration, not holding out until I’ve examined every potential opportunity in the entire universe or local miniverse of investing

That’s kind of the 32,000-ft view of how I arrived at my 16 JNets. But it’s a good question and it deserves a specific answer, as well, for the questioner’s sake and for my own sake in keeping myself honest, come what may. So, here’s a little bit more about how I made the decision to add these 16 companies to my portfolio.

The first pass

The 16 companies I invested in came from a spreadsheet of 49 companies I gathered data on. Those 49 companies came from two places.

The first source, representing the majority of the companies that ultimately made it to my spreadsheet of 49, was a list of 100 JNets from a Bloomberg screen that someone shared with Nate Tobik. To this list Nate added five columns, in each of which a company was assigned a "1" for yes or a "0" for no: whether the company showed a net profit in each of the last ten years, whether it showed positive EBIT in each of the last ten years, whether it had debt, whether it paid a dividend, and whether it had bought back shares over the last ten years. The columns were summed, and anything with a cumulative score of "4" or "5" made it onto my master spreadsheet for further investigation.
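
As I understand it, that first cut amounts to a simple binary scoring screen; here is a sketch, where the flag names are mine and I am assuming the debt column was scored so that being debt-free earns the "1" (since a higher sum is supposed to be better):

```python
# Five binary flags per company, 1 = yes / 0 = no
FLAGS = ["net_profit_10yr", "positive_ebit_10yr",
         "debt_free", "pays_dividend", "bought_back_shares"]

def passes_first_cut(company):
    """Keep only names with a cumulative score of 4 or 5."""
    return sum(company[f] for f in FLAGS) >= 4

example = {"net_profit_10yr": 1, "positive_ebit_10yr": 1,
           "debt_free": 1, "pays_dividend": 1, "bought_back_shares": 0}
print(passes_first_cut(example))  # True: score of 4
```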

The second source was the blogs of other value investors such as Geoff Gannon and Gurpreet Narang (Neat Value). I just grabbed everything I found and threw it on my list. I figured if it was good enough for these investors, it was worth closer examination for me, too.

The second pass

Once I had my companies, I started building my spreadsheet. First, I listed each company along with its stock symbol in Japan (where securities are quoted by four-digit numerical codes). Then I added basic data about the shares, such as shares outstanding, share price, average volume (important for position-sizing later on), market capitalization and current dividend yield.

After this, I listed important balance sheet data: cash (calculated as cash + ST investments), receivables, inventory, other current assets, total current assets, LT debt and total liabilities, and then the NCAV and net cash position for each company. Following this were three balance sheet price ratios, Market Cap/NCAV, Market Cap/Net Cash and Market Cap/Cash; the lower the ratio, the better. Market Cap/Net Cash is a more conservative valuation than Market Cap/NCAV, while Market Cap/Cash is less conservative but useful for evaluating companies which were debt free and had profitable operations. Some companies with uneven operating outlooks are best valued on a liquidation basis (NCAV, Net Cash), but a company with average operating performance is more properly considered cheap against a metric like the percentage of its market cap made up of balance sheet cash, assuming it is debt free.
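
In code, the three ratios look something like the sketch below. Note that I never spelled out the net cash formula above, so the sketch assumes the common net-net convention of cash less total liabilities:

```python
def price_ratios(market_cap, current_assets, total_liabilities, cash):
    """The three balance sheet price ratios; lower = cheaper.
    `cash` is cash + short-term investments, per the spreadsheet."""
    ncav = current_assets - total_liabilities
    net_cash = cash - total_liabilities  # assumed definition
    return {
        "mcap/ncav": market_cap / ncav if ncav > 0 else float("inf"),
        "mcap/net_cash": market_cap / net_cash if net_cash > 0 else float("inf"),
        "mcap/cash": market_cap / cash,
    }
```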

I also constructed some income metric columns, but before I could do this, I created two new tabs, “Net Inc” and “EBIT”, and copied the symbols and names from the previous tab over and then recorded the annual net income and EBIT for each company for the previous ten years. This data all came from MSN Money, like the rest of the data I had collected up to that point.

Then I carried this info back to my original "Summary" tab via formulas to calculate columns for 10-year average annual EBIT, previous-year EBIT, Enterprise Value (EV), EV/EBIT (10-year average) and EV/EBIT (previous year), as well as the earnings yield (10-year average annual net income divided by market cap), plus the previous five years' annual average, to try to capture whether the business had dramatically changed since the global recession.

The final step was to go through the list thus assembled and color-code each company according to the legend of green for a cash bargain, blue for a net cash bargain and orange for an NCAV bargain (strictly defined as a company trading for 66% of NCAV or less; anything at 67% or higher would not get color-coded).
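
Expressed in code, the color-coding might look like the sketch below; only the 66%-of-NCAV rule is stated explicitly in my legend, so the check order (strictest test first) and the "ratio below 1" cutoffs for the net cash and cash tests are my own reading:

```python
def color_code(r):
    """r is the dict from price_ratios(); returns a bargain label or None."""
    if r["mcap/ncav"] <= 0.66:
        return "orange"  # NCAV bargain: price at 66% of NCAV or less
    if r["mcap/net_cash"] < 1:
        return "blue"    # net cash bargain
    if r["mcap/cash"] < 1:
        return "green"   # cash bargain
    return None          # left white: not the cheapest of the cheap
```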

I was trying to create a quick, visually obvious pattern for recognizing the cheapest of the cheap, understanding that my time is valuable and I could always go dig into each non-color coded name individually looking for other bargains as necessary.

The result, and psychological bias rears its ugly head

Looking over my spreadsheet, about two-thirds of the list was color-coded in this way, with the remaining third left white. The white entries are not necessarily expensive, nor are they necessarily trading above their NCAV; they were just not the cheapest of the cheap according to the three strict criteria I used.

After reviewing the results, my desire was to purchase all of the net cash stocks (there were only a handful), all of the NCAVs and then as many of the cash bargains as possible. You see, this was where one of the first hurdles came in: how much of my portfolio did I want to devote to this strategy of buying JNets? I ultimately settled upon 20-25% of my portfolio; however, that wasn't the end of it.

Currently, I have accounts at several brokerages, but I use Fidelity for the majority of my trading. Fidelity has good access to the Japanese equity markets and will even let you trade electronically. For electronic trades, the commission is Y3,000, whereas a broker-assisted trade is Y8,000. I wanted to control the size of my trading costs relative to my positions by placing a strict ceiling on commissions of no more than 2% of the total position value; ideally, I wanted to pay closer to 1%, if possible. The other consideration was lot sizes. The Japanese equity markets have different lot-size rules than the US: at each price range category there is a minimum lot size, and these lots are usually in increments of 100, 1,000, etc.
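
That commission ceiling translates directly into a minimum position size, which is simple to back out:

```python
def min_position(commission, max_cost_ratio):
    """Smallest position value that keeps commission within the ceiling."""
    return commission / max_cost_ratio

print(min_position(3000, 0.02))  # 150000.0: Y150,000 minimum at the 2% cap
print(min_position(3000, 0.01))  # 300000.0: Y300,000 at the preferred ~1%
```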

After doing the math I decided I'd want to have 15-20 different positions in my portfolio. Ideally, I would've liked to own a lot more, maybe even all of them, similar to the thinking behind Nate Tobik's recent post on Japanese equities over at Oddball Stocks. But I didn't have the capital for that, so once I had decided on position-sizing and total number of positions, I had to come up with some criteria for choosing the lucky few.

This is where my own psychological bias started playing a role. You see, I wanted to just "buy cheap": get all the net cash bargains, then all the NCAVs, then some of the cash bargains. But I let my earnings yield numbers (calculated for the benefit of making decisions about some of the cash bargain stocks) influence my thinking on the net cash and NCAV stocks. And then I peeked at the EBIT and net income tables and got frightened by the fact that some of these companies had a loss year or two, or had declining earnings pictures.

I started second-guessing some of the choices of the color-coded bargain system. I began doing a mish-mash of seeking "cheap" plus "perceived quality". In other words, I may have made a mistake by letting heuristics get in the way of passionless rules. According to research spelled out in an outstanding whitepaper by Toby Carlisle, the author of Greenbackd.com, trying to "second-guess the model" like this could be a mistake.

Cheap enough?

Ultimately, this "Jekyll and Hyde" selection process led to my current portfolio of 16 JNets. Earlier in this post I suggested that one of my principles for inclusion was that the thing be "cheap enough". Whether I strictly followed the output of my bargain model or tried to eyeball quality for an individual pick, I think every one of these companies meets the general test of "cheap enough" to buy for a diversified basket of similar-class companies, because all are trading at substantial discounts to their "fair" value, or their value to a private buyer of the entire company. What's more, while some of these companies may be facing declining earnings prospects, as of right now every one of them is profitable on an operating and net basis, and almost all are debt free (the few that have debt carry a de minimis amount and/or have it covered by cash on the balance sheet). I believe that significantly limits my risk of suffering a catastrophic loss in any one of these names, and especially in the portfolio as a whole, at least on a Yen-denominated basis.

Of course, my currency risk remains and currently I have not landed on a strategy for hedging it in a cost-effective and easy-to-use way.

I suppose the only concern I have at this point is whether my portfolio is “cheap enough” to earn me outsized returns over time. I wonder about my queasiness when looking at the uneven or declining earnings prospects of some of these companies and the way I let it influence my decision-making process and second-guess what should otherwise be a reliable model for picking a basket of companies that are likely to produce above-average returns over time. I question whether I might have eliminated one useful advantage (buying stuff that is just out and out cheap) by trying to add personal genius to it in thinking I could take in the “whole picture” better than my simple screen and thereby come up with an improved handicapping for some of my companies.

Considering that I don’t know Japanese and don’t know much about these companies outside of the statistical data I collected and an inquiry into the industry they operate in (which may be somewhat meaningless anyway in the mega-conglomerated, mega-diversified world of the Japanese corporate economy), it required great hubris, at a minimum, to think I even had cognizance of a “whole picture” on which to base an attempt at informed judgment.

But then, that’s the art of the leap of faith!