The financial services industry is a perfect example of one where information asymmetry between businesses and consumers means that consumers are easily duped by those who might try to take advantage of them. That asymmetry is exacerbated by the vast array of bewildering technical jargon that has sprung up, jargon which, not accidentally, makes it even more difficult for people to understand what is happening to their money. Unfortunately, it's also an industry that folks are all but compelled to interact with; after all, as everyone knows, we all must prepare for retirement, and that ultimately means saving and investing.
These facts were made painfully evident to me recently, as I heard the story of someone who was duped by an FS Financial employee into a dubious investment strategy that is likely to cost them thousands of dollars or more while enriching that employee.
So, below, I'm going to try to cut through a bit of the bullshit and explain what happened in this particular case and, along the way, hopefully illustrate some of the ways that folks can get screwed by unscrupulous investment advisers.
Mutual funds and deferred sales charges
Have you ever wondered why the investment adviser exists? Why do these people do this job? How do they get paid?
Well, let's start with the basic idea of an RRSP. An RRSP is essentially a registered account. When you "contribute" money to that account, what you're actually doing is buying shares in some sort of investment vehicle; the account is simply where those shares are held.
To make the business worthwhile, the investment adviser collects some sort of commission.
These commissions can be taken as a percentage of the invested amount up-front, often up to 5%. But far more common, these days, is something called a "deferred sales charge". In a DSC setup, the mutual fund provider pays the commission to the investment adviser instead of taking it out of the amount being invested.
Sounds great, right?
Well, there's a kicker: if you try to take that money back out of the fund, there's a penalty! In the first year, that fee can be as much as 6, 7, or even 8% of the amount being withdrawn, declining each year the money is held in the fund until it reaches zero, usually 6-8 years out.
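To make that concrete, here's a sketch in Python of how a DSC penalty plays out. The schedule below is hypothetical; the exact rates and duration vary by fund:

```python
# Hypothetical DSC schedule: penalty rate for each full year held.
# Real schedules vary by fund; this one starts at 6% and hits zero in year 8.
DSC_SCHEDULE = [0.06, 0.055, 0.05, 0.045, 0.04, 0.03, 0.02, 0.0]

def dsc_penalty(amount_withdrawn, years_held):
    """Penalty owed for withdrawing early under the hypothetical schedule."""
    if years_held >= len(DSC_SCHEDULE):
        return 0.0
    return amount_withdrawn * DSC_SCHEDULE[years_held]

# Pulling $50,000 out after two full years costs $2,500 under this schedule.
penalty = dsc_penalty(50_000, 2)
```

Note that the penalty applies to the amount being withdrawn, not to the commission that was paid: the bigger the position, the bigger the exit cost.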
Now, if you're investing for the long term and you can tolerate some risk of the fund doing poorly, this is fine. But if, for any reason, you need to liquidate your holdings (say, to put a down payment on a home), this can turn into many thousands of dollars in penalties.
Of course, this ain't news. The DSC is a scourge on the industry and has been reported in many places and at many times.
But it gets interesting when you add something else to the mix: leveraged investing.
Woah, wait, "leveraged"? What the hell does that mean?
Well, it's more needless technical jargon. All it means is "borrowing money and then investing it".
That should immediately make anyone nervous. Unfortunately, I suspect that the mortgage industry has trained people to think this is normal. How so? Well, folks think of a house as an investment.
They shouldn't, but they do.
Viewed this way, a mortgage is actually a leveraged investment, as opposed to a lifestyle purchase decision like a car or a boat.
Now, in the case of leveraged investing, the basic idea is incredibly simple:
- Borrow money using an investment loan.
- Invest that borrowed money in the market in a non-registered account.
- Pay only the interest on the investment loan.
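The steps above can be sketched as a toy model. All the numbers here are made up for illustration, not taken from any actual product:

```python
def leveraged_position(loan, annual_rate, years, annual_return):
    """Toy model of interest-only leveraged investing: borrow `loan`,
    invest all of it, and pay only the interest each year.
    Returns (investment_value, total_interest_paid, net_position)."""
    investment_value = loan * (1 + annual_return) ** years
    total_interest = loan * annual_rate * years  # interest-only payments
    net_position = investment_value - loan - total_interest
    return investment_value, total_interest, net_position

# $100,000 borrowed at 4% for 5 years, with markets returning 6% a year...
value, interest, net = leveraged_position(100_000, 0.04, 5, 0.06)
# ...versus the same loan through a 10% annual decline: deeply under water.
bad_value, _, bad_net = leveraged_position(100_000, 0.04, 5, -0.10)
```

In the good scenario the investor comes out ahead after interest; in the bad one, the investment is worth far less than the loan it was bought with.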
There are a couple of reasons why this might be appealing.
First, the interest paid on the investment loan is tax deductible. This means that investors who've hit their RRSP contribution limit can continue to purchase shares in a non-registered account while realizing some of the immediate tax advantages.
Second, lump-sum investing like this typically outperforms traditional periodic investing, for a simple reason: the full amount is in the market from day one, so there's more time for compounding to work on it.
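A quick illustration of that second point, again with made-up numbers (0.5% per month over one year):

```python
def lump_sum(total, monthly_rate, months):
    """Invest everything up front and compound monthly."""
    return total * (1 + monthly_rate) ** months

def monthly_investing(total, monthly_rate, months):
    """Invest total/months at the start of each month instead."""
    contribution = total / months
    value = 0.0
    for _ in range(months):
        value = (value + contribution) * (1 + monthly_rate)
    return value

# With a positive expected return, the lump sum finishes ahead because
# the full amount compounds for the entire period.
ls = lump_sum(12_000, 0.005, 12)
monthly = monthly_investing(12_000, 0.005, 12)
```

Of course, this cuts both ways: in a falling market, the lump sum loses more, for exactly the same reason.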
Sounds great, right?
Well, leveraged investing and mortgages share an important trait: in both cases, if the value of the asset purchased with debt drops lower than the borrowed amount, the borrower finds themselves "under water".
We saw this in the 2008 crash, when millions of people found that the value of their home had dropped below the value of their mortgage. House flippers, unable to sell for enough to cover their loans, were forced to default. The result was a rash of bankruptcy filings and mortgage defaults.
When it comes to leveraged investing, this means that, if the value of the investment drops below the amount of the loan, and the investor then finds themselves in the position where they need to liquidate their holdings (for example, if they lose their job and can no longer afford the interest payments on the loan), they'll find themselves unable to pay back the loan in full.
As a result, leveraged investing carries with it a great deal of risk, and is absolutely not something for the faint of heart.
FS Financial - Worst of both worlds
So, what happened to this person I know?
Well, they were sold on the idea of borrowing a significant amount of money and purchasing shares in a mutual fund with, yup, you guessed it, a deferred sales charge.
This greatly compounds their financial risk, because it means that if their investment declines below the value of the loan and they decide they want to liquidate their holdings, they'll then have to pay a significant penalty.
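A toy worst case, with made-up numbers, shows how the two products compound:

```python
# Hypothetical: $100,000 borrowed and invested, the market drops 20%,
# and the investor is forced to liquidate during year two of a DSC
# schedule charging 5% (both figures invented for illustration).
loan = 100_000
investment_value = loan * 0.80           # after a 20% market decline
dsc_fee = investment_value * 0.05        # deferred sales charge on exit
proceeds = investment_value - dsc_fee
shortfall = loan - proceeds              # still owed after selling everything
```

Selling the entire position covers only $76,000 of the $100,000 loan; the DSC turns a bad situation into a worse one.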
Moreover, the jackass investment "adviser" who recommended this strategy did so at a time when markets are looking increasingly shaky, with stock values at a peak while the Canadian, Australian, and BRIC economies have begun to stumble. This means that, in the short run, a) the value of those investments is likely to decline, and b) there's a non-trivial chance of a lay-off due to a weak local economy.
But it gets worse.
Naturally, this person asked the investment adviser if there would be any risk associated with the value of the investment dropping below the value of the loan, to which the adviser replied that no, they would not be responsible for those losses.
That seems like a lie on its face, but it turns out it's not. They were simply answering a different question.
A detour into margin loans
Oh god, more terminology, I know.
So, remember when I mentioned an "investment loan" earlier? Well, there's another kind of loan out there that can be used for leveraged investment and it's called a "margin loan".
A margin loan is designed to reduce the bank's risk that the investor could find themselves under water. Part of the loan's terms is a maximum loan-to-value ratio: the ratio of the value of the loan to the value of the investment. If that ratio rises above the allowed level (because the investment has lost value), the investor must sell some of their investment to pay back part of the loan.
This is referred to as a "margin call".
As you can imagine, if you're a leveraged investor with a margin loan and things go bad, you may be forced to sell some of your position and realize immediate losses. This adds to the investor's risk, but it means the bank doesn't take on as much risk of the investor defaulting.
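The mechanics can be sketched like so; the 75% maximum loan-to-value ratio below is a hypothetical figure, as actual terms vary by lender:

```python
def margin_call_amount(loan, investment_value, max_ltv=0.75):
    """How much of the investment must be sold (with the proceeds paying
    down the loan) to restore a hypothetical maximum loan-to-value ratio."""
    if loan <= max_ltv * investment_value:
        return 0.0  # within the allowed ratio: no margin call
    # Selling x reduces both sides: (loan - x) / (investment_value - x) = max_ltv
    return (loan - max_ltv * investment_value) / (1 - max_ltv)

# A $75,000 loan against a $100,000 investment sits exactly at the limit.
ok = margin_call_amount(75_000, 100_000)
# If the investment falls to $90,000, the investor is forced to sell.
forced_sale = margin_call_amount(75_000, 90_000)
```

In the second case the investor has to liquidate $30,000 of their position at depressed prices, realizing that loss immediately.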
Back to FS Financial
So what our jackass investment "adviser" was actually saying was that our investor would not be subject to a margin call if the investment declines in value. This is technically true because the loan was a regular ol' interest-only investment loan and not a margin loan.
But that wasn't the damn question.
This does not mean that the investor isn't responsible for paying back the loan and realizing losses if they can't repay by selling off their mutual fund. They absolutely are.
Was the adviser lying or just stupid? I have to assume the former.
One more thing
Looking at the communication between the subject of this blog post and their FS Financial jackass investment "adviser", I noticed something very interesting: they referred to the interest payments on the investment loan as "contributions".
Except they're nothing of the kind.
Remember, when leveraging to invest, you have a loan. In this particular structure, the investor then pays just the interest on that loan every month. But the "contribution", that is the purchase of the mutual fund shares themselves, is done at the time the loan is granted and the purchase executed.
Every payment after that is just servicing the loan.
So when, later, this person indicated they wanted to "increase their contributions", thinking of this like a traditional RRSP structure, the FS Financial jackass of course agreed! They took the amount by which the "contribution" was to be increased, calculated how big a loan would be required to raise the interest payments by that amount, and promptly got the loan approved and the investment purchased.
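On an interest-only loan, the arithmetic behind that move is simple; the 4% interest rate below is an assumption for illustration:

```python
def extra_loan_for_payment_increase(monthly_increase, annual_rate):
    """On an interest-only loan, monthly payment = loan * annual_rate / 12,
    so a desired payment increase maps directly to additional principal."""
    return monthly_increase * 12 / annual_rate

# Asking to "contribute" $100 more per month at a hypothetical 4% rate
# actually means borrowing another $30,000.
extra_loan = extra_loan_for_payment_increase(100, 0.04)
```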
It's a clever little twist of language designed to do one thing: confuse the investor into forgetting that what they were actually doing was just increasing their leveraged position, and their associated risk.
When you consider what actually happened here (the investor got a loan, bought an investment, paid interest on the loan, and was subject to fees on early exit), it's all actually incredibly simple. The complexity is all in the jargon.
So when thinking about this stuff, always try to peel away the language and get down to the nuts and bolts of what's going on. What are you paying? What are your obligations? What can you do if you need to exit the deal? These are basic questions that a shady financial guy might not want to answer. So the job of the investor is to cut through the BS to get the answers they need.
And if all else fails, remember: you can always walk away. If your financial services guy seems to be dazzling you with jargon or giving you the run around, leave. Remember: you're not stupid just because you don't understand their lingo. A good, smart adviser will know how to explain these concepts in a way you'll understand, because ultimately, this stuff ain't that complicated. If they don't do that, or can't, they're not worth your time.
So in the kickoff post of my series on data structures and algorithms I'd like to begin with a relatively simple but handy little data structure: the trie. If you want to jump ahead and look at a very simplistic implementation of a trie data structure (only the insert and dump operations have been completed), I've put my experimental code up on GitHub here.
A clever little play on the word re*trie*val (though I, and many others, insist on pronouncing it "try"… suck it etymology), a trie is a key-value store represented as an n-ary tree, except that unlike a typical key-value store, no one node stores the key itself. Instead, the path to the value within the tree is defined by the characters/bits/what-have-you that define the key itself. Yeah, that's pretty abstract, why don't we just look at an example:
In this construction I've chosen the following set of keys:
As you can see, each character in the key is used to label an edge in the tree, while the nodes store the values associated with that key (note, in this example I've chosen to use the keys as values as well… this is entirely artificial, and a bit confusing. Just remember, those values could be absolutely anything.)1 Typically these keys are strings, as depicted here, although it's entirely possible to build a bit-wise trie keyed off of arbitrary strings of bits. To find the value for a key, you take each character and, starting with the root node, transition through the graph until the target node is found. Or, as pseudo-code:
find(root_node, key):
    current_node = root_node
    current_key = key

    while current_key.length > 0:
        character = current_key.remove_first()
        if current_node.has_edge_for(character):
            current_node = current_node.get_edge_for(character).endpoint
        else:
            throw "ERMAGERD"

    return current_node.value
Strangely, a very similar algorithm can be used for both inserts and deletes.
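To make that concrete, here's a minimal runnable trie in Python supporting insert and find. This is my own sketch for this post, not the GitHub code mentioned above:

```python
class TrieNode:
    def __init__(self):
        self.children = {}  # edge label (a single character) -> child node
        self.value = None   # payload stored at this node, if any

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, key, value):
        # Walk the key character by character, creating edges as needed.
        node = self.root
        for ch in key:
            node = node.children.setdefault(ch, TrieNode())
        node.value = value

    def find(self, key):
        # The same walk as insert, but bail out if an edge is missing.
        node = self.root
        for ch in key:
            if ch not in node.children:
                return None
            node = node.children[ch]
        return node.value

t = Trie()
for word in ["a", "an", "ant", "art"]:
    t.insert(word, word)  # using keys as values, as in the example above
```

Note how insert and find really are near-identical walks over the same structure, which is exactly the similarity noted above.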
Some Interesting Properties
The trie offers a number of interesting advantages over traditional key-value stores such as hash tables and binary search trees:
- As mentioned previously, they have the peculiar feature that inserts, deletes, and lookups use very similar codepaths, and thus have very similar performance characteristics. As such, in applications where these operations are performed with equal frequency, the trie can provide better overall performance than other more traditional key-value stores.
- Lookup performance is a function of key length, as opposed to key distribution or dataset size. As such, for lookups they can outperform both hash tables and BSTs.
- They are quite space efficient for short keys, as key prefixes are shared between edges, resulting in compression of the graph.
- They enable longest-prefix matching. Given a candidate key, a trie can be used to perform a closest fit search with the same performance as an exact search.
- Pre-order traversal of the graph generates an ordered list of the keys (in fact, this implementation is a form of radix sort).
- Unlike hash tables, there's no need to design a hash function, and collisions can only occur if identical keys are inserted multiple times.
Because tries are well-suited to fuzzy matching algorithms, they often see use in spell checking implementations or other areas involving fuzzy matching against a dictionary. In addition, the trie forms the core of Radix/PATRICIA and Suffix Trees, both of which are interesting enough to warrant separate posts of their own. Stay tuned!
1. Interestingly, if you looked at this example graph, you'd be forgiven for assuming it was an illustration of a finite state machine, with the characters in the key triggering transitions to deeper levels of the graph.
Generally speaking, I've adhered to the rule that those who develop software should be aware of various classes of algorithms and data structures, but should avoid implementing them if at all possible. The reasoning here is pretty simple, and I think pretty common:
- You're reinventing the wheel. Stop that, we have enough wheels.
- You're probably reinventing it badly.
So just go find yourself the appropriate wheel to solve your problem and move on.
Ah, but there's a gotcha here: speaking for myself, I never truly understand an algorithm or data structure, both theoretically (i.e., how it works in the abstract, its complexity, and so on) and practically (i.e., how you'd actually implement the thing), until I try to implement it. After all, these things can be tricky to grok in the abstract, and when you actually implement them you discover all kinds of details and edge cases that need to be dealt with.
Now, I've spent a lot of my free time learning about programming languages (the tools of our trade that we use to express our ideas), and about software architecture and design, the "blueprints", if you will. But if languages are the tools and the architecture and design are the blueprints, algorithms and data structures are akin to the templates carpenters use for building doors, windows, etc. That is, they provide a general framework for solving various classes of problems that we as developers encounter day-to-day.
And, like a framer, day-to-day we may very well make use of various prefabbed components to get our jobs done more quickly and efficiently. But without understanding how and why those components are built the way they are, it can be very easy to misuse or abuse them. Plus, it can't hurt if, when someone comes along and asks you to show off your mad skillz, you can demonstrate your ability to build one of those components from scratch.
Consequently, I plan to kick off a round of posts wherein I explore various interesting algorithms and data structures that happen to catch my attention. So far I have a couple on the list that look interesting, either because I don't know them, or because it's been so long that I've forgotten them…
- Skip list
- Fibonacci heap
- Red-Black tree
- Radix/PATRICIA Tries
- Suffix Tries
- Bloom filter
- Various streaming algorithms (computations over read-once streams of data):
- Heavy hitters (finding elements that appear more often than a prescribed frequency)
- Counting distinct elements
- Computing entropy
- Topological sort
And I guarantee there's more that belong on this list, but this is just an initial roadmap… assuming I follow through, anyway.
Using Git to push changes upstream to servers is incredibly handy. In essence, you set up a bare repository on the target server, configure git to use the production application path as the git working directory, and then set up hooks to automatically update the working directory when changes are pushed into the repository. The result is dead easy code deployment, as you can simply push from your repository to the remote on the server.
But making this work when the Git repository is being hosted on Windows is a bit tricky. Normally ssh is the default transport for git, but making that work on Windows is an enormous pain. As such, this little writeup assumes the use of HTTP as the transport protocol.
So, first up we need to install a couple of components: msysgit and Apache httpd.
Note: When installing msysgit, make sure to select the option that installs git in your path! After installation the system path should include the following1:
C:\Program Files\Git\cmd;C:\Program Files\Git\bin;C:\Program Files\Git\libexec\git-core
Now, in addition, we'll be using git-http-backend to serve up our repository, and it turns out the msysgit installation of this tool is broken such that one of its required DLLs is not in the directory where it's installed. As such, you need to copy:
Once you have the software installed, create your bare repository by firing up Git Bash and running something like:
$ mkdir -p /c/git/project.git
$ cd /c/git/project.git
$ git init --bare
$ git config core.worktree c:/path/to/webroot
$ git config http.receivepack true
$ touch git-daemon-export-ok
Those last three commands are vital and will ensure that we can push to the repository, and that the repository uses our web root as the working tree.
Next up, add the following lines to your httpd.conf:
SetEnv GIT_PROJECT_ROOT c:/git/
ScriptAlias /git/ "C:/Program Files/Git/libexec/git-core/git-http-backend.exe/"
<Directory "C:/Program Files/Git/libexec/git-core/">
    Options +ExecCGI +FollowSymLinks
    Allow from all
</Directory>
Note, I've omitted any security, here. You'll probably want to enable some form of HTTP authentication.
In addition, in order to make hooks work, you need to reconfigure the Apache daemon to run as a normal user. Obviously this user should have permissions to read from/write to the git repository folder and web root.
Oh, and last but not least, don't forget to restart Apache at this point.
Pushing the Base Repository
So, we now have our repository exposed, let's try to push to it. Assuming you have an already established repository ready to go and it's our master branch we want to publish, we just need to do a:
git remote add server http://myserver/git/project.git
git push server master
In theory, anyway.
Note: After the initial push, in at least one instance I've found that "logs/refs" wasn't present in the server bare repository. This breaks, among other things, git stash. To remedy this I simply created that folder manually.
Lastly, you can pop over to your server, fire up Git Bash, and:
$ cd /c/git/project.git
$ git checkout master
So, about those hooks. I use two, one that triggers before a new update comes to stash any local changes, and then another after a pack is applied to update the working tree and then unstash those local changes. The first is a pre-receive hook:
cd `git config --get core.worktree`
git stash save --include-untracked
The second is a post-update hook:
cd `git config --get core.worktree`
git checkout -f
git reset --hard HEAD
git stash pop
Obviously you can do whatever you want, here. This is just something I slapped together for a test server I was working with.
1. Obviously any paths, here, would need to be tweaked on a 64-bit server with a 32-bit Git.
So, out of a certain level of idle curiosity, a few months back I decided to contact my community league1 to find out what would be involved in getting a community garden started in my area. Community gardens are, to me, an intriguing concept: get access to some land (either city property or donated private property), get members of the local community together, and then grow food! Of course, it's particularly interesting to me as a guy who's always lived in a small house with little to no room for a garden, leaving a community garden as the only option I'd have to get access to a decent sized plot of land. And I suspect, deep down, I'm actually a closet hippy yearning for a commune…
Of course, there's no shortage of community gardens in the city, but gaining access to them can be tough, and none are particularly close to my home. Meanwhile, I live along a rather large hydro corridor, which means a ton of seemingly under-utilized greenspace, in a neighbourhood dominated by small homes with tiny yards and high-density residential in the form of three-story condo blocks that, needless to say, have no yards at all. It would seem like exactly the kind of area where a community garden would flourish.
And so I emailed my local community league, and then promptly put the whole idea out of my mind. I tend to have a short attention span like that. So colour me surprised when a few weeks later I received a reply from the current league webmaster indicating that she'd be very happy to bring the idea to the league board… she just had one question: would I be willing to take point on this project?
And it may be totally crazy, but… I said yes. So, she'll be bringing the topic up to the board this week, and all signs indicate that they'll provide their support, which means the ball may actually start rolling on this.
1. Fun fact: community leagues in Edmonton are quite powerful compared to similar organizations in other cities (Edmonton was also the first city in Canada to adopt these kinds of organizations). If you want to have an influence on politics in your area, the two most important things you could possibly do are a) vote for your city councillor, and b) get involved in your community league, as they typically handle park development (including skating rinks, playgrounds, and so forth), manage local community programs, and get involved in land use and transportation issues.
So, a couple years back I started doing some subcontracting work for a buddy of mine who runs a little ColdFusion consultancy. As part of that work, I took ownership of one of the projects another sub had built for one of his clients, and the experience has been… interesting.
See, like PHP and Perl, ColdFusion has the wonderful property of making it very easy for middling developers to write truly awful code that, ultimately, gets the job done. And so it is with this project. My predecessor was, to be complimentary, one of those middling developers. The codebase itself is a total mess. Like, if there were a digital version of Hoarders, this code might be on it. But it does get the job done, and ultimately, when it comes to customers, that's what matters (well, until the bugs start rolling in).
Of course, as a self-respecting(-ish) developer, this is a nightmare. In the beginning, I dreaded modifying the code. Duplication is rampant, meaning a fix in one place may need to be done in many. Side effects are ubiquitous, so it's difficult to predict the results of a change. Even simple things like consistent indentation are nowhere to be found. And don't even dream of anything like automated regression tests.
Worse, feeling no ownership of the code, my strategy was to minimally disturb the code as it existed while implementing new features or bug fixes, which meant the status quo remained. Fortunately, around a year ago I finally got over this last hump and made the decision to gradually start modernizing the code. And that's where things got fun.
One of the biggest problems with this code is that data access and business logic are littered throughout the code, with absolutely no separation between data and views. And, remember, it's duplicated. Often. So the first order of business? Build a real data access layer, and do it such that the new code could live beside the old. Of course, this last requirement was fairly easy since there was no pre-existing data access layer to live beside…
So, in the last year, I've built at least a dozen CFCs that, slowly but surely, are beginning to encompass large portions of the (thankfully fairly simple) data model and attendant business logic. Then, as I've implemented new features or fixed bugs, I've migrated old business logic into the new data access layer and then updated old code to use the new object layer. Gradually, the old code is eroding away. Very gradually.
Finally, after a year of chipping away, the tide is slowly starting to turn. There's still loads of legacy code kicking around (including a surprising amount of simply dead code… apparently my predecessor didn't understand how version control works: if you want to remove code, remove it, don't comment it out!). But more and more often, bugs that need to be fixed are getting fixed in one place. New features are able to leverage the object layer, cutting down development time and bugs. And some major new features coming down the pipe will be substantially easier to build with this new infrastructure in place. It's really incredibly satisfying, in a god-damn-this-is-how-it-should-be sort of way.
The funny thing is, this kind of approach goes very much against my natural instincts. Conservative by nature, I'm often the last person to start rewriting code. However, if there's one thing this project has taught me (along with a couple wonderfully excited, eager co-workers), it's that sometimes you really do have to gut the basement to fix the cracks in the foundation. And sometimes, you just gotta tear the whole house down.
Well, yet another long blogging hiatus. So what's so important that I would take the time to author yet another scintillating installment? Why, a knitting project, of course!
Some good friends of ours are expecting, and as I often do, another baby blanket is thus in the queue. This one, however, is a bit unique, in that the mother has a very specific, and I think fairly awesome, request: she wants a Tux blanket.
Of course, with some video game knitting experience behind me, throwing together a pattern for this is pretty straightforward:
- Knit a swatch to determine row and stitch gauge. This is really important, as this determines our pixel aspect ratio.
- Based on desired measurements, calculate the size of our canvas by multiplying the stitch and row gauges by the target width and height respectively (in this case, 36x48 inches for 162x288 pixels).
- Find decent source image.
- Scale image to fit into desired canvas and layout, making sure to take into account our aspect ratio!
- Apply a "posterize" filter to limit the number of colours to our knitting palette.
- Scale back up by 6-8 times, and use GIMP's grid generator plugin to transform the image into a pattern.
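The canvas calculation in the second step boils down to a couple of multiplications. As a sanity check (the gauge figures here are inferred from the numbers above, roughly 4.5 stitches and 6 rows per inch, not actual swatch measurements):

```python
def canvas_size(width_in, height_in, stitch_gauge, row_gauge):
    """Stitches (width) and rows (height) for a target size,
    given gauge in stitches and rows per inch."""
    return round(width_in * stitch_gauge), round(height_in * row_gauge)

# 36x48 inches at 4.5 stitches and 6 rows per inch gives the
# 162x288 "pixel" canvas described above.
stitches, rows = canvas_size(36, 48, 4.5, 6)
```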
Then, I just split the image into three pieces and printed out in portrait mode. Voila! Pattern complete!
Next step, actually knitting the thing (I've already got materials picked out).